00:00:00.001 Started by upstream project "autotest-per-patch" build number 127205
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.121 The recommended git tool is: git
00:00:00.121 using credential 00000000-0000-0000-0000-000000000002
00:00:00.123 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.177 Fetching changes from the remote Git repository
00:00:00.179 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.217 Using shallow fetch with depth 1
00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.217 > git --version # timeout=10
00:00:00.253 > git --version # 'git version 2.39.2'
00:00:00.253 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.276 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.276 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.471 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.484 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.500 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:06.500 > git config core.sparsecheckout # timeout=10
00:00:06.512 > git read-tree -mu HEAD # timeout=10
00:00:06.532 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:06.556 Commit message: "packer: Add bios builder"
00:00:06.556 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:06.644 [Pipeline] Start of Pipeline
00:00:06.655 [Pipeline] library
00:00:06.656 Loading library shm_lib@master
00:00:06.656 Library shm_lib@master is cached. Copying from home.
00:00:06.671 [Pipeline] node
00:00:06.678 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.679 [Pipeline] {
00:00:06.688 [Pipeline] catchError
00:00:06.689 [Pipeline] {
00:00:06.699 [Pipeline] wrap
00:00:06.705 [Pipeline] {
00:00:06.710 [Pipeline] stage
00:00:06.711 [Pipeline] { (Prologue)
00:00:06.882 [Pipeline] sh
00:00:07.164 + logger -p user.info -t JENKINS-CI
00:00:07.182 [Pipeline] echo
00:00:07.183 Node: WFP6
00:00:07.190 [Pipeline] sh
00:00:07.486 [Pipeline] setCustomBuildProperty
00:00:07.499 [Pipeline] echo
00:00:07.501 Cleanup processes
00:00:07.507 [Pipeline] sh
00:00:07.793 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.793 1224137 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.805 [Pipeline] sh
00:00:08.084 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.084 ++ grep -v 'sudo pgrep'
00:00:08.084 ++ awk '{print $1}'
00:00:08.084 + sudo kill -9
00:00:08.084 + true
00:00:08.098 [Pipeline] cleanWs
00:00:08.108 [WS-CLEANUP] Deleting project workspace...
00:00:08.108 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.113 [WS-CLEANUP] done
00:00:08.116 [Pipeline] setCustomBuildProperty
00:00:08.129 [Pipeline] sh
00:00:08.407 + sudo git config --global --replace-all safe.directory '*'
00:00:08.496 [Pipeline] httpRequest
00:00:08.519 [Pipeline] echo
00:00:08.520 Sorcerer 10.211.164.101 is alive
00:00:08.528 [Pipeline] httpRequest
00:00:08.532 HttpMethod: GET
00:00:08.533 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:08.533 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:08.559 Response Code: HTTP/1.1 200 OK
00:00:08.560 Success: Status code 200 is in the accepted range: 200,404
00:00:08.561 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:15.269 [Pipeline] sh
00:00:15.557 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:15.576 [Pipeline] httpRequest
00:00:15.598 [Pipeline] echo
00:00:15.600 Sorcerer 10.211.164.101 is alive
00:00:15.610 [Pipeline] httpRequest
00:00:15.615 HttpMethod: GET
00:00:15.615 URL: http://10.211.164.101/packages/spdk_487ff9e1a11f021d35737cc7c68c1a173253666a.tar.gz
00:00:15.616 Sending request to url: http://10.211.164.101/packages/spdk_487ff9e1a11f021d35737cc7c68c1a173253666a.tar.gz
00:00:15.629 Response Code: HTTP/1.1 200 OK
00:00:15.630 Success: Status code 200 is in the accepted range: 200,404
00:00:15.630 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_487ff9e1a11f021d35737cc7c68c1a173253666a.tar.gz
00:01:33.216 [Pipeline] sh
00:01:33.503 + tar --no-same-owner -xf spdk_487ff9e1a11f021d35737cc7c68c1a173253666a.tar.gz
00:01:36.052 [Pipeline] sh
00:01:36.373 + git -C spdk log --oneline -n5
00:01:36.373 487ff9e1a pkgdep/rhel: add misspell checker package for fedora
00:01:36.373 064b11df7 general: fix misspells and typos
00:01:36.373 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:36.373 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:36.373 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:36.409 [Pipeline] }
00:01:36.427 [Pipeline] // stage
00:01:36.436 [Pipeline] stage
00:01:36.439 [Pipeline] { (Prepare)
00:01:36.457 [Pipeline] writeFile
00:01:36.474 [Pipeline] sh
00:01:36.758 + logger -p user.info -t JENKINS-CI
00:01:36.770 [Pipeline] sh
00:01:37.057 + logger -p user.info -t JENKINS-CI
00:01:37.074 [Pipeline] sh
00:01:37.357 + cat autorun-spdk.conf
00:01:37.357 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.357 SPDK_TEST_NVMF=1
00:01:37.357 SPDK_TEST_NVME_CLI=1
00:01:37.357 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.357 SPDK_TEST_NVMF_NICS=e810
00:01:37.357 SPDK_TEST_VFIOUSER=1
00:01:37.357 SPDK_RUN_UBSAN=1
00:01:37.357 NET_TYPE=phy
00:01:37.365 RUN_NIGHTLY=0
00:01:37.370 [Pipeline] readFile
00:01:37.394 [Pipeline] withEnv
00:01:37.396 [Pipeline] {
00:01:37.410 [Pipeline] sh
00:01:37.695 + set -ex
00:01:37.695 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:37.695 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.695 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.695 ++ SPDK_TEST_NVMF=1
00:01:37.695 ++ SPDK_TEST_NVME_CLI=1
00:01:37.695 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.695 ++ SPDK_TEST_NVMF_NICS=e810
00:01:37.695 ++ SPDK_TEST_VFIOUSER=1
00:01:37.695 ++ SPDK_RUN_UBSAN=1
00:01:37.695 ++ NET_TYPE=phy
00:01:37.695 ++ RUN_NIGHTLY=0
00:01:37.695 + case $SPDK_TEST_NVMF_NICS in
00:01:37.695 + DRIVERS=ice
00:01:37.695 + [[ tcp == \r\d\m\a ]]
00:01:37.695 + [[ -n ice ]]
00:01:37.695 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:37.695 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:37.695 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:37.695 rmmod: ERROR: Module irdma is not currently loaded
00:01:37.695 rmmod: ERROR: Module i40iw is not currently loaded
00:01:37.695 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:37.695 + true
00:01:37.695 + for D in $DRIVERS
00:01:37.695 + sudo modprobe ice
00:01:37.695 + exit 0
00:01:37.705 [Pipeline] }
00:01:37.723 [Pipeline] // withEnv
00:01:37.730 [Pipeline] }
00:01:37.748 [Pipeline] // stage
00:01:37.759 [Pipeline] catchError
00:01:37.761 [Pipeline] {
00:01:37.777 [Pipeline] timeout
00:01:37.778 Timeout set to expire in 50 min
00:01:37.780 [Pipeline] {
00:01:37.796 [Pipeline] stage
00:01:37.798 [Pipeline] { (Tests)
00:01:37.815 [Pipeline] sh
00:01:38.100 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:38.100 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:38.100 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:38.100 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:38.100 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:38.100 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:38.100 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:38.100 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:38.100 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:38.100 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:38.100 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:38.100 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:38.100 + source /etc/os-release
00:01:38.100 ++ NAME='Fedora Linux'
00:01:38.100 ++ VERSION='38 (Cloud Edition)'
00:01:38.100 ++ ID=fedora
00:01:38.100 ++ VERSION_ID=38
00:01:38.100 ++ VERSION_CODENAME=
00:01:38.100 ++ PLATFORM_ID=platform:f38
00:01:38.100 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:38.100 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:38.100 ++ LOGO=fedora-logo-icon
00:01:38.100 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:38.100 ++ HOME_URL=https://fedoraproject.org/
00:01:38.100 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:38.100 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:38.100 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:38.100 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:38.100 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:38.100 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:38.100 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:38.100 ++ SUPPORT_END=2024-05-14
00:01:38.100 ++ VARIANT='Cloud Edition'
00:01:38.100 ++ VARIANT_ID=cloud
00:01:38.100 + uname -a
00:01:38.100 Linux spdk-wfp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:38.100 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:40.635 Hugepages
00:01:40.635 node hugesize free / total
00:01:40.635 node0 1048576kB 0 / 0
00:01:40.635 node0 2048kB 0 / 0
00:01:40.635 node1 1048576kB 0 / 0
00:01:40.635 node1 2048kB 0 / 0
00:01:40.635
00:01:40.635 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:40.635 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:40.635 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:40.635 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:40.635 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:40.635 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:40.635 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:40.635 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:40.635 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:40.635 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:40.635 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:40.635 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:40.635 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:40.635 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:40.635 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:40.635 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:40.635 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:40.635 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:40.635 + rm -f /tmp/spdk-ld-path
00:01:40.635 + source autorun-spdk.conf
00:01:40.635 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.635 ++ SPDK_TEST_NVMF=1
00:01:40.635 ++ SPDK_TEST_NVME_CLI=1
00:01:40.635 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:40.635 ++ SPDK_TEST_NVMF_NICS=e810
00:01:40.635 ++ SPDK_TEST_VFIOUSER=1
00:01:40.635 ++ SPDK_RUN_UBSAN=1
00:01:40.635 ++ NET_TYPE=phy
00:01:40.635 ++ RUN_NIGHTLY=0
00:01:40.635 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:40.635 + [[ -n '' ]]
00:01:40.635 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.635 + for M in /var/spdk/build-*-manifest.txt
00:01:40.635 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:40.635 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:40.635 + for M in /var/spdk/build-*-manifest.txt
00:01:40.635 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:40.635 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:40.635 ++ uname
00:01:40.635 + [[ Linux == \L\i\n\u\x ]]
00:01:40.635 + sudo dmesg -T
00:01:40.635 + sudo dmesg --clear
00:01:40.635 + dmesg_pid=1225589
00:01:40.635 + [[ Fedora Linux == FreeBSD ]]
00:01:40.635 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:40.635 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:40.635 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:40.635 + sudo dmesg -Tw
00:01:40.635 + [[ -x /usr/src/fio-static/fio ]]
00:01:40.635 + export FIO_BIN=/usr/src/fio-static/fio
00:01:40.635 + FIO_BIN=/usr/src/fio-static/fio
00:01:40.635 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:40.635 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:40.635 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:40.635 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:40.635 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:40.635 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:40.635 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:40.635 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:40.635 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:40.894 Test configuration:
00:01:40.894 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.894 SPDK_TEST_NVMF=1
00:01:40.894 SPDK_TEST_NVME_CLI=1
00:01:40.894 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:40.894 SPDK_TEST_NVMF_NICS=e810
00:01:40.894 SPDK_TEST_VFIOUSER=1
00:01:40.894 SPDK_RUN_UBSAN=1
00:01:40.894 NET_TYPE=phy
00:01:40.894 RUN_NIGHTLY=0
11:09:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
11:09:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
11:09:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:09:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:09:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:09:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:09:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:09:36 -- paths/export.sh@5 -- $ export PATH
11:09:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:09:36 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
11:09:36 -- common/autobuild_common.sh@447 -- $ date +%s
11:09:36 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721984976.XXXXXX
00:01:40.895 11:09:36 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721984976.RjZhkK
00:01:40.895 11:09:36 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:40.895 11:09:36 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:40.895 11:09:36 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:40.895 11:09:36 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:40.895 11:09:36 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:40.895 11:09:36 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:40.895 11:09:36 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:40.895 11:09:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.895 11:09:36 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:40.895 11:09:36 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:40.895 11:09:36 -- pm/common@17 -- $ local monitor
00:01:40.895 11:09:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:40.895 11:09:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:40.895 11:09:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:40.895 11:09:36 -- pm/common@21 -- $ date +%s
00:01:40.895 11:09:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:40.895 11:09:36 -- pm/common@21 -- $ date +%s
00:01:40.895 11:09:36 -- pm/common@25 -- $ sleep 1
00:01:40.895 11:09:36 -- pm/common@21 -- $ date +%s
00:01:40.895 11:09:36 -- pm/common@21 -- $ date +%s
00:01:40.895 11:09:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721984976
00:01:40.895 11:09:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721984976
00:01:40.895 11:09:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721984976
00:01:40.895 11:09:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721984976
00:01:40.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721984976_collect-vmstat.pm.log
00:01:40.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721984976_collect-cpu-load.pm.log
00:01:40.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721984976_collect-cpu-temp.pm.log
00:01:40.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721984976_collect-bmc-pm.bmc.pm.log
00:01:41.833 11:09:37 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:41.833 11:09:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:41.833 11:09:37 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:41.833 11:09:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:41.833 11:09:37 -- spdk/autobuild.sh@16 -- $ date -u
00:01:41.833 Fri Jul 26 09:09:37 AM UTC 2024
00:01:41.833 11:09:37 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:41.833 v24.09-pre-323-g487ff9e1a
00:01:41.833 11:09:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:41.833 11:09:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:41.833 11:09:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:41.833 11:09:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:41.833 11:09:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:41.833 11:09:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.833 ************************************
00:01:41.833 START TEST ubsan
00:01:41.833 ************************************
00:01:41.833 11:09:37 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:41.833 using ubsan
00:01:41.833
00:01:41.833 real 0m0.000s
00:01:41.833 user 0m0.000s
00:01:41.833 sys 0m0.000s
00:01:41.833 11:09:37 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:41.833 11:09:37 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:41.833 ************************************
00:01:41.833 END TEST ubsan
00:01:41.833 ************************************
00:01:42.092 11:09:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:42.092 11:09:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:42.092 11:09:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:42.092 11:09:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:42.092 11:09:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:42.092 11:09:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:42.092 11:09:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:42.092 11:09:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:42.092 11:09:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:42.092 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:42.092 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:42.351 Using 'verbs' RDMA provider
00:01:55.496 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:07.707 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:07.707 Creating mk/config.mk...done.
00:02:07.707 Creating mk/cc.flags.mk...done.
00:02:07.707 Type 'make' to build.
00:02:07.707 11:10:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:02:07.707 11:10:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:07.707 11:10:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:07.707 11:10:02 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.707 ************************************
00:02:07.707 START TEST make
00:02:07.707 ************************************
00:02:07.707 11:10:02 make -- common/autotest_common.sh@1125 -- $ make -j96
00:02:07.707 make[1]: Nothing to be done for 'all'.
00:02:08.644 The Meson build system
00:02:08.644 Version: 1.3.1
00:02:08.644 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:08.644 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:08.644 Build type: native build
00:02:08.644 Project name: libvfio-user
00:02:08.644 Project version: 0.0.1
00:02:08.644 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:08.644 C linker for the host machine: cc ld.bfd 2.39-16
00:02:08.644 Host machine cpu family: x86_64
00:02:08.644 Host machine cpu: x86_64
00:02:08.644 Run-time dependency threads found: YES
00:02:08.644 Library dl found: YES
00:02:08.644 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:08.644 Run-time dependency json-c found: YES 0.17
00:02:08.644 Run-time dependency cmocka found: YES 1.1.7
00:02:08.644 Program pytest-3 found: NO
00:02:08.644 Program flake8 found: NO
00:02:08.644 Program misspell-fixer found: NO
00:02:08.644 Program restructuredtext-lint found: NO
00:02:08.644 Program valgrind found: YES (/usr/bin/valgrind)
00:02:08.644 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:08.644 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:08.644 Compiler for C supports arguments -Wwrite-strings: YES
00:02:08.644 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:08.644 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:08.644 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:08.644 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:08.644 Build targets in project: 8
00:02:08.644 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:08.644 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:08.644
00:02:08.644 libvfio-user 0.0.1
00:02:08.644
00:02:08.644 User defined options
00:02:08.644 buildtype : debug
00:02:08.644 default_library: shared
00:02:08.644 libdir : /usr/local/lib
00:02:08.644
00:02:08.644 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:09.210 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:09.210 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:09.210 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:09.210 [3/37] Compiling C object samples/null.p/null.c.o
00:02:09.210 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:09.210 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:09.210 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:09.210 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:09.210 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:09.210 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:09.210 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:09.210 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:09.210 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:09.210 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:09.210 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:09.210 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:09.210 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:09.210 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:09.210 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:09.210 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:09.210 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:09.210 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:09.210 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:09.210 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:09.210 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:09.468 [25/37] Compiling C object samples/server.p/server.c.o
00:02:09.468 [26/37] Compiling C object samples/client.p/client.c.o
00:02:09.468 [27/37] Linking target samples/client
00:02:09.468 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:09.468 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:09.468 [30/37] Linking target test/unit_tests
00:02:09.468 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:02:09.726 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:09.726 [33/37] Linking target samples/lspci
00:02:09.726 [34/37] Linking target samples/gpio-pci-idio-16
00:02:09.726 [35/37] Linking target samples/null
00:02:09.726 [36/37] Linking target samples/server
00:02:09.726 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:09.726 INFO: autodetecting backend as ninja
00:02:09.726 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.726 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.984 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:09.984 ninja: no work to do.
00:02:15.253 The Meson build system
00:02:15.253 Version: 1.3.1
00:02:15.253 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:15.253 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:15.253 Build type: native build
00:02:15.253 Program cat found: YES (/usr/bin/cat)
00:02:15.253 Project name: DPDK
00:02:15.253 Project version: 24.03.0
00:02:15.253 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:15.253 C linker for the host machine: cc ld.bfd 2.39-16
00:02:15.253 Host machine cpu family: x86_64
00:02:15.253 Host machine cpu: x86_64
00:02:15.253 Message: ## Building in Developer Mode ##
00:02:15.253 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:15.253 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:15.253 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:15.253 Program python3 found: YES (/usr/bin/python3)
00:02:15.253 Program cat found: YES (/usr/bin/cat)
00:02:15.253 Compiler for C supports arguments -march=native: YES
00:02:15.253 Checking for size of "void *" : 8
00:02:15.253 Checking for size of "void *" : 8 (cached)
00:02:15.253 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:15.253 Library m found: YES
00:02:15.253 Library numa found: YES
00:02:15.253 Has header "numaif.h" : YES
00:02:15.253 Library fdt found: NO
00:02:15.253 Library execinfo found: NO
00:02:15.253 Has header "execinfo.h" : YES
00:02:15.253 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:15.253 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:15.253 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:15.253 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:15.253 Run-time dependency openssl found: YES 3.0.9
00:02:15.253 Run-time dependency libpcap found: YES 1.10.4
00:02:15.253 Has header "pcap.h" with dependency libpcap: YES
00:02:15.253 Compiler for C supports arguments -Wcast-qual: YES
00:02:15.253 Compiler for C supports arguments -Wdeprecated: YES
00:02:15.253 Compiler for C supports arguments -Wformat: YES
00:02:15.253 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:15.253 Compiler for C supports arguments -Wformat-security: NO
00:02:15.253 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:15.253 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:15.253 Compiler for C supports arguments -Wnested-externs: YES
00:02:15.254 Compiler for C supports arguments -Wold-style-definition: YES
00:02:15.254 Compiler for C supports arguments -Wpointer-arith: YES
00:02:15.254 Compiler for C supports arguments -Wsign-compare: YES
00:02:15.254 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:15.254 Compiler for C supports arguments -Wundef: YES
00:02:15.254 Compiler for C supports arguments -Wwrite-strings: YES
00:02:15.254 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:15.254 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:15.254 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:15.254 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:15.254 Program objdump found: YES (/usr/bin/objdump)
00:02:15.254 Compiler for C supports arguments -mavx512f: YES
00:02:15.254 Checking if "AVX512 checking" compiles: YES
00:02:15.254 Fetching value of define "__SSE4_2__" : 1
00:02:15.254 Fetching value of define "__AES__" : 1
00:02:15.254 Fetching value of define "__AVX__" : 1
00:02:15.254 Fetching value of define "__AVX2__" : 1
00:02:15.254 Fetching value of define "__AVX512BW__" : 1
00:02:15.254 Fetching value of define "__AVX512CD__" : 1
00:02:15.254 Fetching value of define "__AVX512DQ__" : 1
00:02:15.254 Fetching value of define "__AVX512F__" : 1
00:02:15.254 Fetching value of define "__AVX512VL__" : 1 00:02:15.254 Fetching value of define "__PCLMUL__" : 1 00:02:15.254 Fetching value of define "__RDRND__" : 1 00:02:15.254 Fetching value of define "__RDSEED__" : 1 00:02:15.254 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.254 Fetching value of define "__znver1__" : (undefined) 00:02:15.254 Fetching value of define "__znver2__" : (undefined) 00:02:15.254 Fetching value of define "__znver3__" : (undefined) 00:02:15.254 Fetching value of define "__znver4__" : (undefined) 00:02:15.254 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.254 Message: lib/log: Defining dependency "log" 00:02:15.254 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.254 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.254 Checking for function "getentropy" : NO 00:02:15.254 Message: lib/eal: Defining dependency "eal" 00:02:15.254 Message: lib/ring: Defining dependency "ring" 00:02:15.254 Message: lib/rcu: Defining dependency "rcu" 00:02:15.254 Message: lib/mempool: Defining dependency "mempool" 00:02:15.254 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.254 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.254 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.254 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.254 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.254 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.254 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:15.254 Compiler for C supports arguments -mpclmul: YES 00:02:15.254 Compiler for C supports arguments -maes: YES 00:02:15.254 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.254 Compiler for C supports arguments -mavx512bw: YES 00:02:15.254 Compiler for C supports arguments -mavx512dq: YES 00:02:15.254 Compiler for C supports arguments -mavx512vl: YES 00:02:15.254 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:15.254 Compiler for C supports arguments -mavx2: YES 00:02:15.254 Compiler for C supports arguments -mavx: YES 00:02:15.254 Message: lib/net: Defining dependency "net" 00:02:15.254 Message: lib/meter: Defining dependency "meter" 00:02:15.254 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.254 Message: lib/pci: Defining dependency "pci" 00:02:15.254 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.254 Message: lib/hash: Defining dependency "hash" 00:02:15.254 Message: lib/timer: Defining dependency "timer" 00:02:15.254 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.254 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.254 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.254 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.254 Message: lib/power: Defining dependency "power" 00:02:15.254 Message: lib/reorder: Defining dependency "reorder" 00:02:15.254 Message: lib/security: Defining dependency "security" 00:02:15.254 Has header "linux/userfaultfd.h" : YES 00:02:15.254 Has header "linux/vduse.h" : YES 00:02:15.254 Message: lib/vhost: Defining dependency "vhost" 00:02:15.254 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.254 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.254 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.254 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.254 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:15.254 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:15.254 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:15.254 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:15.254 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:15.254 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:15.254 Program doxygen found: YES (/usr/bin/doxygen) 00:02:15.254 Configuring doxy-api-html.conf using configuration 00:02:15.254 Configuring doxy-api-man.conf using configuration 00:02:15.254 Program mandb found: YES (/usr/bin/mandb) 00:02:15.254 Program sphinx-build found: NO 00:02:15.254 Configuring rte_build_config.h using configuration 00:02:15.254 Message: 00:02:15.254 ================= 00:02:15.254 Applications Enabled 00:02:15.254 ================= 00:02:15.254 00:02:15.254 apps: 00:02:15.254 00:02:15.254 00:02:15.254 Message: 00:02:15.254 ================= 00:02:15.254 Libraries Enabled 00:02:15.254 ================= 00:02:15.254 00:02:15.254 libs: 00:02:15.254 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.254 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:15.254 cryptodev, dmadev, power, reorder, security, vhost, 00:02:15.254 00:02:15.254 Message: 00:02:15.254 =============== 00:02:15.254 Drivers Enabled 00:02:15.254 =============== 00:02:15.254 00:02:15.254 common: 00:02:15.254 00:02:15.254 bus: 00:02:15.254 pci, vdev, 00:02:15.254 mempool: 00:02:15.254 ring, 00:02:15.254 dma: 00:02:15.254 00:02:15.254 net: 00:02:15.254 00:02:15.254 crypto: 00:02:15.254 00:02:15.254 compress: 00:02:15.254 00:02:15.254 vdpa: 00:02:15.254 00:02:15.254 00:02:15.254 Message: 00:02:15.254 ================= 00:02:15.254 Content Skipped 00:02:15.254 ================= 00:02:15.254 00:02:15.254 apps: 00:02:15.254 dumpcap: explicitly disabled via build config 00:02:15.254 graph: explicitly disabled via build config 00:02:15.254 pdump: explicitly disabled via build config 00:02:15.254 proc-info: explicitly disabled via build config 00:02:15.254 test-acl: explicitly disabled via build config 00:02:15.254 test-bbdev: explicitly disabled via build config 00:02:15.254 test-cmdline: explicitly disabled via build config 00:02:15.254 test-compress-perf: explicitly disabled via build config 00:02:15.254 test-crypto-perf: explicitly disabled via 
build config 00:02:15.254 test-dma-perf: explicitly disabled via build config 00:02:15.254 test-eventdev: explicitly disabled via build config 00:02:15.254 test-fib: explicitly disabled via build config 00:02:15.254 test-flow-perf: explicitly disabled via build config 00:02:15.254 test-gpudev: explicitly disabled via build config 00:02:15.254 test-mldev: explicitly disabled via build config 00:02:15.254 test-pipeline: explicitly disabled via build config 00:02:15.254 test-pmd: explicitly disabled via build config 00:02:15.254 test-regex: explicitly disabled via build config 00:02:15.254 test-sad: explicitly disabled via build config 00:02:15.254 test-security-perf: explicitly disabled via build config 00:02:15.254 00:02:15.254 libs: 00:02:15.254 argparse: explicitly disabled via build config 00:02:15.254 metrics: explicitly disabled via build config 00:02:15.254 acl: explicitly disabled via build config 00:02:15.254 bbdev: explicitly disabled via build config 00:02:15.254 bitratestats: explicitly disabled via build config 00:02:15.254 bpf: explicitly disabled via build config 00:02:15.254 cfgfile: explicitly disabled via build config 00:02:15.254 distributor: explicitly disabled via build config 00:02:15.254 efd: explicitly disabled via build config 00:02:15.254 eventdev: explicitly disabled via build config 00:02:15.254 dispatcher: explicitly disabled via build config 00:02:15.254 gpudev: explicitly disabled via build config 00:02:15.254 gro: explicitly disabled via build config 00:02:15.254 gso: explicitly disabled via build config 00:02:15.254 ip_frag: explicitly disabled via build config 00:02:15.254 jobstats: explicitly disabled via build config 00:02:15.254 latencystats: explicitly disabled via build config 00:02:15.254 lpm: explicitly disabled via build config 00:02:15.254 member: explicitly disabled via build config 00:02:15.254 pcapng: explicitly disabled via build config 00:02:15.254 rawdev: explicitly disabled via build config 00:02:15.254 regexdev: 
explicitly disabled via build config 00:02:15.254 mldev: explicitly disabled via build config 00:02:15.254 rib: explicitly disabled via build config 00:02:15.254 sched: explicitly disabled via build config 00:02:15.254 stack: explicitly disabled via build config 00:02:15.254 ipsec: explicitly disabled via build config 00:02:15.254 pdcp: explicitly disabled via build config 00:02:15.254 fib: explicitly disabled via build config 00:02:15.254 port: explicitly disabled via build config 00:02:15.254 pdump: explicitly disabled via build config 00:02:15.254 table: explicitly disabled via build config 00:02:15.254 pipeline: explicitly disabled via build config 00:02:15.254 graph: explicitly disabled via build config 00:02:15.254 node: explicitly disabled via build config 00:02:15.255 00:02:15.255 drivers: 00:02:15.255 common/cpt: not in enabled drivers build config 00:02:15.255 common/dpaax: not in enabled drivers build config 00:02:15.255 common/iavf: not in enabled drivers build config 00:02:15.255 common/idpf: not in enabled drivers build config 00:02:15.255 common/ionic: not in enabled drivers build config 00:02:15.255 common/mvep: not in enabled drivers build config 00:02:15.255 common/octeontx: not in enabled drivers build config 00:02:15.255 bus/auxiliary: not in enabled drivers build config 00:02:15.255 bus/cdx: not in enabled drivers build config 00:02:15.255 bus/dpaa: not in enabled drivers build config 00:02:15.255 bus/fslmc: not in enabled drivers build config 00:02:15.255 bus/ifpga: not in enabled drivers build config 00:02:15.255 bus/platform: not in enabled drivers build config 00:02:15.255 bus/uacce: not in enabled drivers build config 00:02:15.255 bus/vmbus: not in enabled drivers build config 00:02:15.255 common/cnxk: not in enabled drivers build config 00:02:15.255 common/mlx5: not in enabled drivers build config 00:02:15.255 common/nfp: not in enabled drivers build config 00:02:15.255 common/nitrox: not in enabled drivers build config 00:02:15.255 
common/qat: not in enabled drivers build config 00:02:15.255 common/sfc_efx: not in enabled drivers build config 00:02:15.255 mempool/bucket: not in enabled drivers build config 00:02:15.255 mempool/cnxk: not in enabled drivers build config 00:02:15.255 mempool/dpaa: not in enabled drivers build config 00:02:15.255 mempool/dpaa2: not in enabled drivers build config 00:02:15.255 mempool/octeontx: not in enabled drivers build config 00:02:15.255 mempool/stack: not in enabled drivers build config 00:02:15.255 dma/cnxk: not in enabled drivers build config 00:02:15.255 dma/dpaa: not in enabled drivers build config 00:02:15.255 dma/dpaa2: not in enabled drivers build config 00:02:15.255 dma/hisilicon: not in enabled drivers build config 00:02:15.255 dma/idxd: not in enabled drivers build config 00:02:15.255 dma/ioat: not in enabled drivers build config 00:02:15.255 dma/skeleton: not in enabled drivers build config 00:02:15.255 net/af_packet: not in enabled drivers build config 00:02:15.255 net/af_xdp: not in enabled drivers build config 00:02:15.255 net/ark: not in enabled drivers build config 00:02:15.255 net/atlantic: not in enabled drivers build config 00:02:15.255 net/avp: not in enabled drivers build config 00:02:15.255 net/axgbe: not in enabled drivers build config 00:02:15.255 net/bnx2x: not in enabled drivers build config 00:02:15.255 net/bnxt: not in enabled drivers build config 00:02:15.255 net/bonding: not in enabled drivers build config 00:02:15.255 net/cnxk: not in enabled drivers build config 00:02:15.255 net/cpfl: not in enabled drivers build config 00:02:15.255 net/cxgbe: not in enabled drivers build config 00:02:15.255 net/dpaa: not in enabled drivers build config 00:02:15.255 net/dpaa2: not in enabled drivers build config 00:02:15.255 net/e1000: not in enabled drivers build config 00:02:15.255 net/ena: not in enabled drivers build config 00:02:15.255 net/enetc: not in enabled drivers build config 00:02:15.255 net/enetfec: not in enabled drivers build 
config 00:02:15.255 net/enic: not in enabled drivers build config 00:02:15.255 net/failsafe: not in enabled drivers build config 00:02:15.255 net/fm10k: not in enabled drivers build config 00:02:15.255 net/gve: not in enabled drivers build config 00:02:15.255 net/hinic: not in enabled drivers build config 00:02:15.255 net/hns3: not in enabled drivers build config 00:02:15.255 net/i40e: not in enabled drivers build config 00:02:15.255 net/iavf: not in enabled drivers build config 00:02:15.255 net/ice: not in enabled drivers build config 00:02:15.255 net/idpf: not in enabled drivers build config 00:02:15.255 net/igc: not in enabled drivers build config 00:02:15.255 net/ionic: not in enabled drivers build config 00:02:15.255 net/ipn3ke: not in enabled drivers build config 00:02:15.255 net/ixgbe: not in enabled drivers build config 00:02:15.255 net/mana: not in enabled drivers build config 00:02:15.255 net/memif: not in enabled drivers build config 00:02:15.255 net/mlx4: not in enabled drivers build config 00:02:15.255 net/mlx5: not in enabled drivers build config 00:02:15.255 net/mvneta: not in enabled drivers build config 00:02:15.255 net/mvpp2: not in enabled drivers build config 00:02:15.255 net/netvsc: not in enabled drivers build config 00:02:15.255 net/nfb: not in enabled drivers build config 00:02:15.255 net/nfp: not in enabled drivers build config 00:02:15.255 net/ngbe: not in enabled drivers build config 00:02:15.255 net/null: not in enabled drivers build config 00:02:15.255 net/octeontx: not in enabled drivers build config 00:02:15.255 net/octeon_ep: not in enabled drivers build config 00:02:15.255 net/pcap: not in enabled drivers build config 00:02:15.255 net/pfe: not in enabled drivers build config 00:02:15.255 net/qede: not in enabled drivers build config 00:02:15.255 net/ring: not in enabled drivers build config 00:02:15.255 net/sfc: not in enabled drivers build config 00:02:15.255 net/softnic: not in enabled drivers build config 00:02:15.255 net/tap: 
not in enabled drivers build config 00:02:15.255 net/thunderx: not in enabled drivers build config 00:02:15.255 net/txgbe: not in enabled drivers build config 00:02:15.255 net/vdev_netvsc: not in enabled drivers build config 00:02:15.255 net/vhost: not in enabled drivers build config 00:02:15.255 net/virtio: not in enabled drivers build config 00:02:15.255 net/vmxnet3: not in enabled drivers build config 00:02:15.255 raw/*: missing internal dependency, "rawdev" 00:02:15.255 crypto/armv8: not in enabled drivers build config 00:02:15.255 crypto/bcmfs: not in enabled drivers build config 00:02:15.255 crypto/caam_jr: not in enabled drivers build config 00:02:15.255 crypto/ccp: not in enabled drivers build config 00:02:15.255 crypto/cnxk: not in enabled drivers build config 00:02:15.255 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.255 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.255 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.255 crypto/mlx5: not in enabled drivers build config 00:02:15.255 crypto/mvsam: not in enabled drivers build config 00:02:15.255 crypto/nitrox: not in enabled drivers build config 00:02:15.255 crypto/null: not in enabled drivers build config 00:02:15.255 crypto/octeontx: not in enabled drivers build config 00:02:15.255 crypto/openssl: not in enabled drivers build config 00:02:15.255 crypto/scheduler: not in enabled drivers build config 00:02:15.255 crypto/uadk: not in enabled drivers build config 00:02:15.255 crypto/virtio: not in enabled drivers build config 00:02:15.255 compress/isal: not in enabled drivers build config 00:02:15.255 compress/mlx5: not in enabled drivers build config 00:02:15.255 compress/nitrox: not in enabled drivers build config 00:02:15.255 compress/octeontx: not in enabled drivers build config 00:02:15.255 compress/zlib: not in enabled drivers build config 00:02:15.255 regex/*: missing internal dependency, "regexdev" 00:02:15.255 ml/*: missing internal dependency, "mldev" 
00:02:15.255 vdpa/ifc: not in enabled drivers build config 00:02:15.255 vdpa/mlx5: not in enabled drivers build config 00:02:15.255 vdpa/nfp: not in enabled drivers build config 00:02:15.255 vdpa/sfc: not in enabled drivers build config 00:02:15.255 event/*: missing internal dependency, "eventdev" 00:02:15.255 baseband/*: missing internal dependency, "bbdev" 00:02:15.255 gpu/*: missing internal dependency, "gpudev" 00:02:15.255 00:02:15.255 00:02:15.255 Build targets in project: 85 00:02:15.255 00:02:15.255 DPDK 24.03.0 00:02:15.255 00:02:15.255 User defined options 00:02:15.255 buildtype : debug 00:02:15.255 default_library : shared 00:02:15.255 libdir : lib 00:02:15.255 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:15.255 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:15.255 c_link_args : 00:02:15.255 cpu_instruction_set: native 00:02:15.255 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:15.255 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:15.255 enable_docs : false 00:02:15.255 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:15.255 enable_kmods : false 00:02:15.255 max_lcores : 128 00:02:15.255 tests : false 00:02:15.255 00:02:15.255 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.515 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:15.784 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.784 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.784 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.784 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.784 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.784 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.784 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.784 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.784 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.784 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.784 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.784 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.784 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.784 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.784 [15/268] Linking static target lib/librte_kvargs.a 00:02:15.784 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.784 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.784 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.043 [19/268] Linking static target lib/librte_log.a 00:02:16.043 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.043 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.043 [22/268] Linking static target lib/librte_pci.a 00:02:16.043 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.043 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.043 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.300 
[26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.300 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.300 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.300 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.300 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.300 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.300 [32/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.300 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.300 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.300 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.300 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.300 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.300 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.300 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.300 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.300 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.300 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.300 [43/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.300 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.300 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.300 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.300 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.300 [48/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.300 [49/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.300 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.300 [51/268] Linking static target lib/librte_meter.a 00:02:16.300 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.300 [53/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.300 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.300 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.300 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.300 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.300 [58/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.300 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.300 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.300 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.300 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.300 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.300 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.300 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.300 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.300 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.300 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.300 [69/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.301 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.301 [71/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.301 [72/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.301 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.301 [74/268] Linking static target lib/librte_telemetry.a 00:02:16.301 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.301 [76/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.301 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.301 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.301 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.301 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.301 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.301 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.301 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.301 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.301 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.301 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:16.301 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.301 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.301 [89/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.301 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.301 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:16.301 [92/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.301 [93/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.301 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.301 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.301 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.301 [97/268] Linking static target lib/librte_ring.a 00:02:16.301 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.301 [99/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.301 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.558 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.558 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:16.558 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.558 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.558 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.558 [106/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:16.558 [107/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.558 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.558 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.558 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.558 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.558 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.558 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.558 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.558 [115/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.558 [116/268] 
Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.558 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.558 [118/268] Linking static target lib/librte_rcu.a 00:02:16.558 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.558 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.558 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.558 [122/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.558 [123/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.558 [124/268] Linking static target lib/librte_mempool.a 00:02:16.558 [125/268] Linking static target lib/librte_net.a 00:02:16.558 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.558 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.558 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.558 [129/268] Linking static target lib/librte_eal.a 00:02:16.558 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.558 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.558 [132/268] Linking static target lib/librte_cmdline.a 00:02:16.558 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.558 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.558 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.558 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.558 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.558 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.558 [139/268] Generating lib/log.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:16.817 [140/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.817 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.817 [142/268] Linking static target lib/librte_mbuf.a 00:02:16.817 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.817 [144/268] Linking target lib/librte_log.so.24.1 00:02:16.817 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.817 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.817 [147/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.817 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:16.817 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.817 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.817 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.817 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.817 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.817 [154/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.817 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.817 [156/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.817 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.817 [158/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.817 [159/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.817 [160/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.817 [161/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.817 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.817 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.817 [164/268] Linking static target lib/librte_reorder.a 00:02:16.817 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.817 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.817 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:16.817 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.817 [169/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:16.817 [170/268] Linking static target lib/librte_dmadev.a 00:02:16.817 [171/268] Linking static target lib/librte_compressdev.a 00:02:16.817 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.817 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.817 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.817 [175/268] Linking target lib/librte_telemetry.so.24.1 00:02:16.817 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.817 [177/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.817 [178/268] Linking target lib/librte_kvargs.so.24.1 00:02:16.817 [179/268] Linking static target lib/librte_power.a 00:02:16.817 [180/268] Linking static target lib/librte_timer.a 00:02:16.817 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.076 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.076 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.076 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.076 
[185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.076 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.076 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.076 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.076 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.076 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.076 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:17.076 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:17.076 [193/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.077 [194/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.077 [195/268] Linking static target drivers/librte_bus_vdev.a 00:02:17.077 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:17.077 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:17.077 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.077 [199/268] Linking static target lib/librte_security.a 00:02:17.077 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.077 [201/268] Linking static target lib/librte_hash.a 00:02:17.077 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.077 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.335 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.335 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.335 [206/268] Linking static target drivers/librte_bus_pci.a 00:02:17.335 [207/268] Generating lib/reorder.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:17.335 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.335 [209/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.335 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.335 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.335 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:17.335 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.336 [214/268] Linking static target lib/librte_cryptodev.a 00:02:17.336 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.336 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.336 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.336 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.336 [219/268] Linking static target lib/librte_ethdev.a 00:02:17.596 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.596 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.596 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.596 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.596 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.931 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.931 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:17.931 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.877 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.877 [229/268] Linking static target lib/librte_vhost.a 00:02:19.136 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.516 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.790 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.724 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.724 [234/268] Linking target lib/librte_eal.so.24.1 00:02:26.983 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:26.983 [236/268] Linking target lib/librte_ring.so.24.1 00:02:26.983 [237/268] Linking target lib/librte_meter.so.24.1 00:02:26.983 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:26.983 [239/268] Linking target lib/librte_pci.so.24.1 00:02:26.983 [240/268] Linking target lib/librte_timer.so.24.1 00:02:26.983 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:27.241 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:27.241 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:27.241 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:27.241 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:27.241 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:27.241 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:27.241 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:27.241 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:27.241 [250/268] Generating symbol 
file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:27.241 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:27.241 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:27.241 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:27.499 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:27.499 [255/268] Linking target lib/librte_net.so.24.1 00:02:27.499 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:27.499 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:27.499 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:27.757 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:27.757 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:27.757 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:27.757 [262/268] Linking target lib/librte_hash.so.24.1 00:02:27.758 [263/268] Linking target lib/librte_security.so.24.1 00:02:27.758 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:27.758 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:27.758 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:28.016 [267/268] Linking target lib/librte_power.so.24.1 00:02:28.016 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:28.016 INFO: autodetecting backend as ninja 00:02:28.016 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:28.951 CC lib/log/log.o 00:02:28.951 CC lib/log/log_flags.o 00:02:28.951 CC lib/log/log_deprecated.o 00:02:28.951 CC lib/ut/ut.o 00:02:28.951 CC lib/ut_mock/mock.o 00:02:28.951 LIB libspdk_ut.a 00:02:28.951 LIB libspdk_log.a 00:02:28.951 LIB libspdk_ut_mock.a 00:02:28.951 SO libspdk_ut.so.2.0 00:02:29.210 SO libspdk_log.so.7.0 
00:02:29.210 SO libspdk_ut_mock.so.6.0 00:02:29.210 SYMLINK libspdk_ut.so 00:02:29.210 SYMLINK libspdk_log.so 00:02:29.210 SYMLINK libspdk_ut_mock.so 00:02:29.468 CC lib/dma/dma.o 00:02:29.468 CC lib/util/base64.o 00:02:29.468 CC lib/util/bit_array.o 00:02:29.468 CC lib/ioat/ioat.o 00:02:29.468 CC lib/util/cpuset.o 00:02:29.468 CC lib/util/crc32.o 00:02:29.468 CC lib/util/crc16.o 00:02:29.468 CC lib/util/crc32c.o 00:02:29.468 CC lib/util/crc32_ieee.o 00:02:29.468 CXX lib/trace_parser/trace.o 00:02:29.468 CC lib/util/crc64.o 00:02:29.468 CC lib/util/fd.o 00:02:29.468 CC lib/util/dif.o 00:02:29.468 CC lib/util/fd_group.o 00:02:29.468 CC lib/util/file.o 00:02:29.468 CC lib/util/hexlify.o 00:02:29.468 CC lib/util/iov.o 00:02:29.468 CC lib/util/math.o 00:02:29.468 CC lib/util/net.o 00:02:29.468 CC lib/util/pipe.o 00:02:29.468 CC lib/util/strerror_tls.o 00:02:29.468 CC lib/util/string.o 00:02:29.468 CC lib/util/uuid.o 00:02:29.468 CC lib/util/xor.o 00:02:29.468 CC lib/util/zipf.o 00:02:29.727 CC lib/vfio_user/host/vfio_user_pci.o 00:02:29.727 CC lib/vfio_user/host/vfio_user.o 00:02:29.727 LIB libspdk_dma.a 00:02:29.727 SO libspdk_dma.so.4.0 00:02:29.727 SYMLINK libspdk_dma.so 00:02:29.727 LIB libspdk_ioat.a 00:02:29.727 SO libspdk_ioat.so.7.0 00:02:29.727 LIB libspdk_vfio_user.a 00:02:29.727 SYMLINK libspdk_ioat.so 00:02:29.727 SO libspdk_vfio_user.so.5.0 00:02:29.985 LIB libspdk_util.a 00:02:29.985 SYMLINK libspdk_vfio_user.so 00:02:29.985 SO libspdk_util.so.10.0 00:02:29.985 SYMLINK libspdk_util.so 00:02:30.243 LIB libspdk_trace_parser.a 00:02:30.244 SO libspdk_trace_parser.so.5.0 00:02:30.244 SYMLINK libspdk_trace_parser.so 00:02:30.244 CC lib/json/json_parse.o 00:02:30.244 CC lib/json/json_util.o 00:02:30.244 CC lib/json/json_write.o 00:02:30.502 CC lib/idxd/idxd.o 00:02:30.502 CC lib/idxd/idxd_user.o 00:02:30.502 CC lib/idxd/idxd_kernel.o 00:02:30.502 CC lib/rdma_provider/common.o 00:02:30.502 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.502 CC lib/vmd/vmd.o 
00:02:30.502 CC lib/vmd/led.o 00:02:30.502 CC lib/conf/conf.o 00:02:30.502 CC lib/rdma_utils/rdma_utils.o 00:02:30.502 CC lib/env_dpdk/env.o 00:02:30.502 CC lib/env_dpdk/memory.o 00:02:30.502 CC lib/env_dpdk/pci.o 00:02:30.502 CC lib/env_dpdk/init.o 00:02:30.502 CC lib/env_dpdk/threads.o 00:02:30.502 CC lib/env_dpdk/pci_ioat.o 00:02:30.502 CC lib/env_dpdk/pci_virtio.o 00:02:30.502 CC lib/env_dpdk/pci_vmd.o 00:02:30.502 CC lib/env_dpdk/pci_idxd.o 00:02:30.502 CC lib/env_dpdk/pci_event.o 00:02:30.502 CC lib/env_dpdk/sigbus_handler.o 00:02:30.502 CC lib/env_dpdk/pci_dpdk.o 00:02:30.502 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:30.502 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:30.502 LIB libspdk_rdma_provider.a 00:02:30.502 SO libspdk_rdma_provider.so.6.0 00:02:30.760 LIB libspdk_rdma_utils.a 00:02:30.760 LIB libspdk_conf.a 00:02:30.760 LIB libspdk_json.a 00:02:30.760 SO libspdk_conf.so.6.0 00:02:30.760 SO libspdk_rdma_utils.so.1.0 00:02:30.760 SYMLINK libspdk_rdma_provider.so 00:02:30.760 SO libspdk_json.so.6.0 00:02:30.760 SYMLINK libspdk_conf.so 00:02:30.760 SYMLINK libspdk_rdma_utils.so 00:02:30.760 SYMLINK libspdk_json.so 00:02:30.760 LIB libspdk_idxd.a 00:02:30.760 SO libspdk_idxd.so.12.0 00:02:31.017 LIB libspdk_vmd.a 00:02:31.017 SYMLINK libspdk_idxd.so 00:02:31.017 SO libspdk_vmd.so.6.0 00:02:31.017 SYMLINK libspdk_vmd.so 00:02:31.017 CC lib/jsonrpc/jsonrpc_server.o 00:02:31.017 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:31.017 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:31.017 CC lib/jsonrpc/jsonrpc_client.o 00:02:31.275 LIB libspdk_jsonrpc.a 00:02:31.275 SO libspdk_jsonrpc.so.6.0 00:02:31.275 SYMLINK libspdk_jsonrpc.so 00:02:31.275 LIB libspdk_env_dpdk.a 00:02:31.533 SO libspdk_env_dpdk.so.15.0 00:02:31.533 SYMLINK libspdk_env_dpdk.so 00:02:31.533 CC lib/rpc/rpc.o 00:02:31.792 LIB libspdk_rpc.a 00:02:31.792 SO libspdk_rpc.so.6.0 00:02:32.051 SYMLINK libspdk_rpc.so 00:02:32.310 CC lib/notify/notify.o 00:02:32.310 CC lib/notify/notify_rpc.o 00:02:32.310 CC 
lib/keyring/keyring.o 00:02:32.310 CC lib/keyring/keyring_rpc.o 00:02:32.310 CC lib/trace/trace.o 00:02:32.310 CC lib/trace/trace_flags.o 00:02:32.310 CC lib/trace/trace_rpc.o 00:02:32.310 LIB libspdk_notify.a 00:02:32.310 SO libspdk_notify.so.6.0 00:02:32.569 LIB libspdk_keyring.a 00:02:32.569 LIB libspdk_trace.a 00:02:32.569 SO libspdk_keyring.so.1.0 00:02:32.569 SYMLINK libspdk_notify.so 00:02:32.569 SO libspdk_trace.so.10.0 00:02:32.569 SYMLINK libspdk_keyring.so 00:02:32.569 SYMLINK libspdk_trace.so 00:02:32.827 CC lib/thread/thread.o 00:02:32.827 CC lib/thread/iobuf.o 00:02:32.827 CC lib/sock/sock.o 00:02:32.827 CC lib/sock/sock_rpc.o 00:02:33.085 LIB libspdk_sock.a 00:02:33.344 SO libspdk_sock.so.10.0 00:02:33.344 SYMLINK libspdk_sock.so 00:02:33.626 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:33.626 CC lib/nvme/nvme_ctrlr.o 00:02:33.626 CC lib/nvme/nvme_fabric.o 00:02:33.626 CC lib/nvme/nvme_ns_cmd.o 00:02:33.626 CC lib/nvme/nvme_ns.o 00:02:33.626 CC lib/nvme/nvme_pcie_common.o 00:02:33.626 CC lib/nvme/nvme_pcie.o 00:02:33.626 CC lib/nvme/nvme_qpair.o 00:02:33.626 CC lib/nvme/nvme.o 00:02:33.626 CC lib/nvme/nvme_quirks.o 00:02:33.626 CC lib/nvme/nvme_transport.o 00:02:33.626 CC lib/nvme/nvme_discovery.o 00:02:33.626 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.626 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.626 CC lib/nvme/nvme_tcp.o 00:02:33.626 CC lib/nvme/nvme_opal.o 00:02:33.626 CC lib/nvme/nvme_io_msg.o 00:02:33.626 CC lib/nvme/nvme_poll_group.o 00:02:33.626 CC lib/nvme/nvme_zns.o 00:02:33.626 CC lib/nvme/nvme_stubs.o 00:02:33.626 CC lib/nvme/nvme_auth.o 00:02:33.626 CC lib/nvme/nvme_cuse.o 00:02:33.626 CC lib/nvme/nvme_vfio_user.o 00:02:33.626 CC lib/nvme/nvme_rdma.o 00:02:33.886 LIB libspdk_thread.a 00:02:33.886 SO libspdk_thread.so.10.1 00:02:33.886 SYMLINK libspdk_thread.so 00:02:34.144 CC lib/init/json_config.o 00:02:34.144 CC lib/init/subsystem.o 00:02:34.144 CC lib/init/subsystem_rpc.o 00:02:34.144 CC lib/init/rpc.o 00:02:34.144 CC lib/blob/request.o 
00:02:34.402 CC lib/blob/zeroes.o 00:02:34.402 CC lib/blob/blobstore.o 00:02:34.402 CC lib/accel/accel.o 00:02:34.402 CC lib/blob/blob_bs_dev.o 00:02:34.402 CC lib/accel/accel_rpc.o 00:02:34.402 CC lib/accel/accel_sw.o 00:02:34.402 CC lib/vfu_tgt/tgt_endpoint.o 00:02:34.402 CC lib/vfu_tgt/tgt_rpc.o 00:02:34.402 CC lib/virtio/virtio.o 00:02:34.402 CC lib/virtio/virtio_vhost_user.o 00:02:34.402 CC lib/virtio/virtio_vfio_user.o 00:02:34.402 CC lib/virtio/virtio_pci.o 00:02:34.402 LIB libspdk_init.a 00:02:34.402 SO libspdk_init.so.5.0 00:02:34.661 LIB libspdk_virtio.a 00:02:34.661 LIB libspdk_vfu_tgt.a 00:02:34.661 SYMLINK libspdk_init.so 00:02:34.661 SO libspdk_virtio.so.7.0 00:02:34.661 SO libspdk_vfu_tgt.so.3.0 00:02:34.661 SYMLINK libspdk_virtio.so 00:02:34.661 SYMLINK libspdk_vfu_tgt.so 00:02:34.920 CC lib/event/app.o 00:02:34.920 CC lib/event/reactor.o 00:02:34.920 CC lib/event/log_rpc.o 00:02:34.920 CC lib/event/app_rpc.o 00:02:34.920 CC lib/event/scheduler_static.o 00:02:34.920 LIB libspdk_accel.a 00:02:34.920 SO libspdk_accel.so.16.0 00:02:35.179 SYMLINK libspdk_accel.so 00:02:35.179 LIB libspdk_nvme.a 00:02:35.179 LIB libspdk_event.a 00:02:35.179 SO libspdk_event.so.14.0 00:02:35.179 SO libspdk_nvme.so.13.1 00:02:35.437 SYMLINK libspdk_event.so 00:02:35.437 CC lib/bdev/bdev.o 00:02:35.437 CC lib/bdev/bdev_rpc.o 00:02:35.437 CC lib/bdev/bdev_zone.o 00:02:35.438 CC lib/bdev/part.o 00:02:35.438 CC lib/bdev/scsi_nvme.o 00:02:35.438 SYMLINK libspdk_nvme.so 00:02:36.375 LIB libspdk_blob.a 00:02:36.375 SO libspdk_blob.so.11.0 00:02:36.375 SYMLINK libspdk_blob.so 00:02:36.634 CC lib/lvol/lvol.o 00:02:36.634 CC lib/blobfs/blobfs.o 00:02:36.634 CC lib/blobfs/tree.o 00:02:37.202 LIB libspdk_bdev.a 00:02:37.202 SO libspdk_bdev.so.16.0 00:02:37.202 SYMLINK libspdk_bdev.so 00:02:37.202 LIB libspdk_blobfs.a 00:02:37.461 LIB libspdk_lvol.a 00:02:37.461 SO libspdk_blobfs.so.10.0 00:02:37.461 SO libspdk_lvol.so.10.0 00:02:37.461 SYMLINK libspdk_lvol.so 00:02:37.461 SYMLINK 
libspdk_blobfs.so 00:02:37.461 CC lib/scsi/dev.o 00:02:37.461 CC lib/scsi/lun.o 00:02:37.461 CC lib/scsi/port.o 00:02:37.461 CC lib/ftl/ftl_core.o 00:02:37.461 CC lib/scsi/scsi.o 00:02:37.461 CC lib/ftl/ftl_init.o 00:02:37.461 CC lib/scsi/scsi_bdev.o 00:02:37.461 CC lib/ftl/ftl_layout.o 00:02:37.461 CC lib/scsi/scsi_pr.o 00:02:37.461 CC lib/ftl/ftl_debug.o 00:02:37.461 CC lib/scsi/scsi_rpc.o 00:02:37.461 CC lib/ftl/ftl_io.o 00:02:37.461 CC lib/scsi/task.o 00:02:37.461 CC lib/ftl/ftl_sb.o 00:02:37.461 CC lib/ftl/ftl_l2p.o 00:02:37.461 CC lib/ftl/ftl_l2p_flat.o 00:02:37.461 CC lib/ftl/ftl_nv_cache.o 00:02:37.461 CC lib/ftl/ftl_band_ops.o 00:02:37.461 CC lib/nbd/nbd.o 00:02:37.462 CC lib/ftl/ftl_band.o 00:02:37.462 CC lib/nbd/nbd_rpc.o 00:02:37.462 CC lib/ftl/ftl_writer.o 00:02:37.462 CC lib/ublk/ublk.o 00:02:37.462 CC lib/ftl/ftl_rq.o 00:02:37.462 CC lib/ublk/ublk_rpc.o 00:02:37.462 CC lib/nvmf/ctrlr.o 00:02:37.462 CC lib/ftl/ftl_reloc.o 00:02:37.462 CC lib/nvmf/ctrlr_discovery.o 00:02:37.462 CC lib/ftl/ftl_p2l.o 00:02:37.462 CC lib/ftl/ftl_l2p_cache.o 00:02:37.462 CC lib/nvmf/ctrlr_bdev.o 00:02:37.462 CC lib/nvmf/subsystem.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt.o 00:02:37.462 CC lib/nvmf/nvmf.o 00:02:37.462 CC lib/nvmf/nvmf_rpc.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:37.462 CC lib/nvmf/transport.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:37.462 CC lib/nvmf/tcp.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:37.462 CC lib/nvmf/stubs.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:37.462 CC lib/nvmf/mdns_server.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:37.462 CC lib/nvmf/rdma.o 00:02:37.462 CC lib/nvmf/vfio_user.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:37.462 CC lib/nvmf/auth.o 00:02:37.462 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:37.462 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:37.462 CC lib/ftl/utils/ftl_md.o 00:02:37.462 CC lib/ftl/utils/ftl_conf.o 00:02:37.720 CC lib/ftl/utils/ftl_mempool.o 00:02:37.720 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:37.720 CC lib/ftl/utils/ftl_property.o 00:02:37.720 CC lib/ftl/utils/ftl_bitmap.o 00:02:37.720 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:37.720 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:37.720 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:37.720 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:37.720 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:37.720 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:37.720 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:37.720 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:37.720 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:37.720 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:37.720 CC lib/ftl/base/ftl_base_bdev.o 00:02:37.720 CC lib/ftl/ftl_trace.o 00:02:37.720 CC lib/ftl/base/ftl_base_dev.o 00:02:38.285 LIB libspdk_scsi.a 00:02:38.285 SO libspdk_scsi.so.9.0 00:02:38.285 LIB libspdk_nbd.a 00:02:38.285 SO libspdk_nbd.so.7.0 00:02:38.285 SYMLINK libspdk_scsi.so 00:02:38.285 LIB libspdk_ublk.a 00:02:38.285 SYMLINK libspdk_nbd.so 00:02:38.285 SO libspdk_ublk.so.3.0 00:02:38.285 SYMLINK libspdk_ublk.so 00:02:38.544 CC lib/vhost/vhost.o 00:02:38.544 CC lib/vhost/vhost_rpc.o 00:02:38.544 CC lib/vhost/vhost_scsi.o 00:02:38.544 CC lib/vhost/vhost_blk.o 00:02:38.544 CC lib/vhost/rte_vhost_user.o 00:02:38.544 CC lib/iscsi/conn.o 00:02:38.544 CC lib/iscsi/init_grp.o 00:02:38.544 CC lib/iscsi/iscsi.o 00:02:38.544 CC lib/iscsi/md5.o 00:02:38.544 CC lib/iscsi/portal_grp.o 00:02:38.544 CC lib/iscsi/param.o 00:02:38.544 CC lib/iscsi/tgt_node.o 00:02:38.544 CC lib/iscsi/iscsi_subsystem.o 00:02:38.544 CC lib/iscsi/iscsi_rpc.o 00:02:38.544 CC lib/iscsi/task.o 00:02:38.544 LIB libspdk_ftl.a 00:02:38.803 SO libspdk_ftl.so.9.0 00:02:39.062 SYMLINK libspdk_ftl.so 00:02:39.322 LIB libspdk_nvmf.a 00:02:39.322 LIB libspdk_vhost.a 00:02:39.322 SO libspdk_vhost.so.8.0 
00:02:39.322 SO libspdk_nvmf.so.19.0 00:02:39.580 SYMLINK libspdk_vhost.so 00:02:39.580 LIB libspdk_iscsi.a 00:02:39.581 SYMLINK libspdk_nvmf.so 00:02:39.581 SO libspdk_iscsi.so.8.0 00:02:39.581 SYMLINK libspdk_iscsi.so 00:02:40.149 CC module/env_dpdk/env_dpdk_rpc.o 00:02:40.149 CC module/vfu_device/vfu_virtio_blk.o 00:02:40.149 CC module/vfu_device/vfu_virtio.o 00:02:40.149 CC module/vfu_device/vfu_virtio_scsi.o 00:02:40.149 CC module/vfu_device/vfu_virtio_rpc.o 00:02:40.408 CC module/keyring/file/keyring.o 00:02:40.408 CC module/accel/ioat/accel_ioat.o 00:02:40.408 CC module/keyring/file/keyring_rpc.o 00:02:40.408 LIB libspdk_env_dpdk_rpc.a 00:02:40.408 CC module/accel/ioat/accel_ioat_rpc.o 00:02:40.408 CC module/accel/dsa/accel_dsa.o 00:02:40.408 CC module/sock/posix/posix.o 00:02:40.408 CC module/keyring/linux/keyring.o 00:02:40.408 CC module/accel/dsa/accel_dsa_rpc.o 00:02:40.408 CC module/keyring/linux/keyring_rpc.o 00:02:40.408 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:40.408 CC module/blob/bdev/blob_bdev.o 00:02:40.408 CC module/accel/error/accel_error.o 00:02:40.408 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.408 CC module/accel/error/accel_error_rpc.o 00:02:40.408 CC module/scheduler/gscheduler/gscheduler.o 00:02:40.408 CC module/accel/iaa/accel_iaa.o 00:02:40.408 CC module/accel/iaa/accel_iaa_rpc.o 00:02:40.408 SO libspdk_env_dpdk_rpc.so.6.0 00:02:40.408 SYMLINK libspdk_env_dpdk_rpc.so 00:02:40.408 LIB libspdk_keyring_file.a 00:02:40.408 LIB libspdk_keyring_linux.a 00:02:40.408 LIB libspdk_accel_ioat.a 00:02:40.408 LIB libspdk_scheduler_gscheduler.a 00:02:40.408 SO libspdk_keyring_file.so.1.0 00:02:40.408 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.408 LIB libspdk_scheduler_dynamic.a 00:02:40.408 SO libspdk_keyring_linux.so.1.0 00:02:40.408 LIB libspdk_accel_error.a 00:02:40.667 SO libspdk_scheduler_gscheduler.so.4.0 00:02:40.667 SO libspdk_accel_ioat.so.6.0 00:02:40.667 SO libspdk_scheduler_dynamic.so.4.0 00:02:40.667 LIB 
libspdk_accel_iaa.a 00:02:40.667 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:40.667 SO libspdk_accel_error.so.2.0 00:02:40.667 SYMLINK libspdk_keyring_file.so 00:02:40.667 LIB libspdk_accel_dsa.a 00:02:40.667 SYMLINK libspdk_keyring_linux.so 00:02:40.667 SO libspdk_accel_iaa.so.3.0 00:02:40.667 LIB libspdk_blob_bdev.a 00:02:40.667 SYMLINK libspdk_accel_ioat.so 00:02:40.667 SYMLINK libspdk_scheduler_dynamic.so 00:02:40.667 SYMLINK libspdk_scheduler_gscheduler.so 00:02:40.667 SO libspdk_accel_dsa.so.5.0 00:02:40.667 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:40.667 SYMLINK libspdk_accel_error.so 00:02:40.667 SO libspdk_blob_bdev.so.11.0 00:02:40.667 SYMLINK libspdk_accel_iaa.so 00:02:40.667 LIB libspdk_vfu_device.a 00:02:40.667 SYMLINK libspdk_accel_dsa.so 00:02:40.667 SYMLINK libspdk_blob_bdev.so 00:02:40.667 SO libspdk_vfu_device.so.3.0 00:02:40.925 SYMLINK libspdk_vfu_device.so 00:02:40.925 LIB libspdk_sock_posix.a 00:02:40.925 SO libspdk_sock_posix.so.6.0 00:02:41.184 SYMLINK libspdk_sock_posix.so 00:02:41.184 CC module/bdev/error/vbdev_error.o 00:02:41.184 CC module/bdev/delay/vbdev_delay.o 00:02:41.184 CC module/bdev/error/vbdev_error_rpc.o 00:02:41.184 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:41.184 CC module/blobfs/bdev/blobfs_bdev.o 00:02:41.184 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:41.184 CC module/bdev/raid/bdev_raid.o 00:02:41.184 CC module/bdev/gpt/gpt.o 00:02:41.184 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:41.184 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:41.184 CC module/bdev/raid/bdev_raid_rpc.o 00:02:41.184 CC module/bdev/gpt/vbdev_gpt.o 00:02:41.184 CC module/bdev/raid/bdev_raid_sb.o 00:02:41.184 CC module/bdev/passthru/vbdev_passthru.o 00:02:41.184 CC module/bdev/raid/raid0.o 00:02:41.184 CC module/bdev/raid/raid1.o 00:02:41.184 CC module/bdev/lvol/vbdev_lvol.o 00:02:41.184 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:41.184 CC module/bdev/raid/concat.o 00:02:41.184 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:02:41.184 CC module/bdev/malloc/bdev_malloc.o 00:02:41.184 CC module/bdev/null/bdev_null.o 00:02:41.184 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:41.184 CC module/bdev/null/bdev_null_rpc.o 00:02:41.184 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:41.184 CC module/bdev/aio/bdev_aio.o 00:02:41.184 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:41.184 CC module/bdev/aio/bdev_aio_rpc.o 00:02:41.184 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:41.184 CC module/bdev/split/vbdev_split.o 00:02:41.184 CC module/bdev/split/vbdev_split_rpc.o 00:02:41.184 CC module/bdev/ftl/bdev_ftl.o 00:02:41.184 CC module/bdev/nvme/bdev_nvme.o 00:02:41.184 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:41.184 CC module/bdev/iscsi/bdev_iscsi.o 00:02:41.184 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:41.184 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:41.184 CC module/bdev/nvme/nvme_rpc.o 00:02:41.184 CC module/bdev/nvme/bdev_mdns_client.o 00:02:41.184 CC module/bdev/nvme/vbdev_opal.o 00:02:41.184 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:41.184 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:41.443 LIB libspdk_blobfs_bdev.a 00:02:41.443 LIB libspdk_bdev_split.a 00:02:41.443 LIB libspdk_bdev_error.a 00:02:41.443 SO libspdk_blobfs_bdev.so.6.0 00:02:41.443 SO libspdk_bdev_split.so.6.0 00:02:41.443 LIB libspdk_bdev_zone_block.a 00:02:41.443 LIB libspdk_bdev_ftl.a 00:02:41.443 SO libspdk_bdev_error.so.6.0 00:02:41.443 LIB libspdk_bdev_null.a 00:02:41.443 SYMLINK libspdk_blobfs_bdev.so 00:02:41.443 SO libspdk_bdev_zone_block.so.6.0 00:02:41.443 LIB libspdk_bdev_gpt.a 00:02:41.443 LIB libspdk_bdev_aio.a 00:02:41.443 SO libspdk_bdev_ftl.so.6.0 00:02:41.443 SYMLINK libspdk_bdev_split.so 00:02:41.443 LIB libspdk_bdev_delay.a 00:02:41.443 LIB libspdk_bdev_passthru.a 00:02:41.443 LIB libspdk_bdev_iscsi.a 00:02:41.443 SO libspdk_bdev_gpt.so.6.0 00:02:41.443 SO libspdk_bdev_null.so.6.0 00:02:41.443 SO libspdk_bdev_aio.so.6.0 00:02:41.702 SO libspdk_bdev_delay.so.6.0 
00:02:41.702 SYMLINK libspdk_bdev_error.so 00:02:41.702 SYMLINK libspdk_bdev_zone_block.so 00:02:41.702 SO libspdk_bdev_passthru.so.6.0 00:02:41.702 SO libspdk_bdev_iscsi.so.6.0 00:02:41.702 SYMLINK libspdk_bdev_ftl.so 00:02:41.702 LIB libspdk_bdev_malloc.a 00:02:41.702 SYMLINK libspdk_bdev_null.so 00:02:41.702 SYMLINK libspdk_bdev_gpt.so 00:02:41.702 SYMLINK libspdk_bdev_aio.so 00:02:41.702 SYMLINK libspdk_bdev_delay.so 00:02:41.702 SO libspdk_bdev_malloc.so.6.0 00:02:41.702 SYMLINK libspdk_bdev_passthru.so 00:02:41.702 SYMLINK libspdk_bdev_iscsi.so 00:02:41.702 SYMLINK libspdk_bdev_malloc.so 00:02:41.702 LIB libspdk_bdev_virtio.a 00:02:41.702 LIB libspdk_bdev_lvol.a 00:02:41.702 SO libspdk_bdev_virtio.so.6.0 00:02:41.702 SO libspdk_bdev_lvol.so.6.0 00:02:41.702 SYMLINK libspdk_bdev_virtio.so 00:02:41.961 SYMLINK libspdk_bdev_lvol.so 00:02:41.961 LIB libspdk_bdev_raid.a 00:02:41.961 SO libspdk_bdev_raid.so.6.0 00:02:42.219 SYMLINK libspdk_bdev_raid.so 00:02:42.786 LIB libspdk_bdev_nvme.a 00:02:42.786 SO libspdk_bdev_nvme.so.7.0 00:02:42.786 SYMLINK libspdk_bdev_nvme.so 00:02:43.723 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.723 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.723 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.723 CC module/event/subsystems/keyring/keyring.o 00:02:43.723 CC module/event/subsystems/vmd/vmd.o 00:02:43.723 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.723 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.723 CC module/event/subsystems/sock/sock.o 00:02:43.723 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:43.723 LIB libspdk_event_keyring.a 00:02:43.723 LIB libspdk_event_vhost_blk.a 00:02:43.723 LIB libspdk_event_vmd.a 00:02:43.723 LIB libspdk_event_scheduler.a 00:02:43.723 LIB libspdk_event_iobuf.a 00:02:43.723 LIB libspdk_event_sock.a 00:02:43.723 LIB libspdk_event_vfu_tgt.a 00:02:43.723 SO libspdk_event_vhost_blk.so.3.0 00:02:43.723 SO libspdk_event_keyring.so.1.0 00:02:43.723 SO 
libspdk_event_vmd.so.6.0 00:02:43.723 SO libspdk_event_scheduler.so.4.0 00:02:43.723 SO libspdk_event_iobuf.so.3.0 00:02:43.723 SO libspdk_event_sock.so.5.0 00:02:43.723 SO libspdk_event_vfu_tgt.so.3.0 00:02:43.723 SYMLINK libspdk_event_vhost_blk.so 00:02:43.723 SYMLINK libspdk_event_keyring.so 00:02:43.723 SYMLINK libspdk_event_vmd.so 00:02:43.723 SYMLINK libspdk_event_scheduler.so 00:02:43.723 SYMLINK libspdk_event_sock.so 00:02:43.723 SYMLINK libspdk_event_vfu_tgt.so 00:02:43.723 SYMLINK libspdk_event_iobuf.so 00:02:43.982 CC module/event/subsystems/accel/accel.o 00:02:44.241 LIB libspdk_event_accel.a 00:02:44.241 SO libspdk_event_accel.so.6.0 00:02:44.241 SYMLINK libspdk_event_accel.so 00:02:44.808 CC module/event/subsystems/bdev/bdev.o 00:02:44.808 LIB libspdk_event_bdev.a 00:02:44.808 SO libspdk_event_bdev.so.6.0 00:02:44.808 SYMLINK libspdk_event_bdev.so 00:02:45.066 CC module/event/subsystems/nbd/nbd.o 00:02:45.066 CC module/event/subsystems/ublk/ublk.o 00:02:45.067 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.067 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.067 CC module/event/subsystems/scsi/scsi.o 00:02:45.324 LIB libspdk_event_nbd.a 00:02:45.324 LIB libspdk_event_ublk.a 00:02:45.324 LIB libspdk_event_scsi.a 00:02:45.324 SO libspdk_event_nbd.so.6.0 00:02:45.324 SO libspdk_event_ublk.so.3.0 00:02:45.324 SO libspdk_event_scsi.so.6.0 00:02:45.324 LIB libspdk_event_nvmf.a 00:02:45.324 SYMLINK libspdk_event_scsi.so 00:02:45.324 SYMLINK libspdk_event_ublk.so 00:02:45.324 SYMLINK libspdk_event_nbd.so 00:02:45.324 SO libspdk_event_nvmf.so.6.0 00:02:45.583 SYMLINK libspdk_event_nvmf.so 00:02:45.583 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:45.583 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.857 LIB libspdk_event_vhost_scsi.a 00:02:45.857 LIB libspdk_event_iscsi.a 00:02:45.857 SO libspdk_event_vhost_scsi.so.3.0 00:02:45.857 SO libspdk_event_iscsi.so.6.0 00:02:45.857 SYMLINK libspdk_event_vhost_scsi.so 00:02:45.857 SYMLINK 
libspdk_event_iscsi.so 00:02:46.163 SO libspdk.so.6.0 00:02:46.163 SYMLINK libspdk.so 00:02:46.452 CC app/spdk_lspci/spdk_lspci.o 00:02:46.452 CXX app/trace/trace.o 00:02:46.452 CC app/trace_record/trace_record.o 00:02:46.452 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.452 CC app/spdk_top/spdk_top.o 00:02:46.452 CC test/rpc_client/rpc_client_test.o 00:02:46.452 CC app/spdk_nvme_perf/perf.o 00:02:46.452 CC app/spdk_nvme_identify/identify.o 00:02:46.452 TEST_HEADER include/spdk/accel_module.h 00:02:46.452 TEST_HEADER include/spdk/accel.h 00:02:46.452 TEST_HEADER include/spdk/assert.h 00:02:46.452 TEST_HEADER include/spdk/base64.h 00:02:46.452 TEST_HEADER include/spdk/bdev.h 00:02:46.453 TEST_HEADER include/spdk/barrier.h 00:02:46.453 TEST_HEADER include/spdk/bdev_module.h 00:02:46.453 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.453 TEST_HEADER include/spdk/bit_array.h 00:02:46.453 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.453 TEST_HEADER include/spdk/bit_pool.h 00:02:46.453 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.453 TEST_HEADER include/spdk/blobfs.h 00:02:46.453 TEST_HEADER include/spdk/blob.h 00:02:46.453 TEST_HEADER include/spdk/conf.h 00:02:46.453 TEST_HEADER include/spdk/config.h 00:02:46.453 TEST_HEADER include/spdk/cpuset.h 00:02:46.453 TEST_HEADER include/spdk/crc32.h 00:02:46.453 TEST_HEADER include/spdk/crc16.h 00:02:46.453 TEST_HEADER include/spdk/crc64.h 00:02:46.453 TEST_HEADER include/spdk/dif.h 00:02:46.453 TEST_HEADER include/spdk/endian.h 00:02:46.453 TEST_HEADER include/spdk/dma.h 00:02:46.453 TEST_HEADER include/spdk/env_dpdk.h 00:02:46.453 TEST_HEADER include/spdk/env.h 00:02:46.453 TEST_HEADER include/spdk/event.h 00:02:46.453 TEST_HEADER include/spdk/fd_group.h 00:02:46.453 TEST_HEADER include/spdk/file.h 00:02:46.453 TEST_HEADER include/spdk/ftl.h 00:02:46.453 TEST_HEADER include/spdk/fd.h 00:02:46.453 TEST_HEADER include/spdk/gpt_spec.h 00:02:46.453 TEST_HEADER include/spdk/histogram_data.h 00:02:46.453 TEST_HEADER 
include/spdk/hexlify.h 00:02:46.453 TEST_HEADER include/spdk/ioat.h 00:02:46.453 CC app/spdk_dd/spdk_dd.o 00:02:46.453 TEST_HEADER include/spdk/idxd_spec.h 00:02:46.453 TEST_HEADER include/spdk/idxd.h 00:02:46.453 TEST_HEADER include/spdk/init.h 00:02:46.453 TEST_HEADER include/spdk/iscsi_spec.h 00:02:46.453 TEST_HEADER include/spdk/json.h 00:02:46.453 TEST_HEADER include/spdk/jsonrpc.h 00:02:46.453 TEST_HEADER include/spdk/ioat_spec.h 00:02:46.453 CC app/nvmf_tgt/nvmf_main.o 00:02:46.453 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:46.453 TEST_HEADER include/spdk/keyring_module.h 00:02:46.453 TEST_HEADER include/spdk/keyring.h 00:02:46.453 TEST_HEADER include/spdk/likely.h 00:02:46.453 TEST_HEADER include/spdk/log.h 00:02:46.453 TEST_HEADER include/spdk/lvol.h 00:02:46.453 TEST_HEADER include/spdk/nbd.h 00:02:46.453 TEST_HEADER include/spdk/memory.h 00:02:46.453 TEST_HEADER include/spdk/mmio.h 00:02:46.453 TEST_HEADER include/spdk/notify.h 00:02:46.453 TEST_HEADER include/spdk/nvme_intel.h 00:02:46.453 TEST_HEADER include/spdk/net.h 00:02:46.453 TEST_HEADER include/spdk/nvme.h 00:02:46.453 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:46.453 TEST_HEADER include/spdk/nvme_spec.h 00:02:46.453 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:46.453 CC app/iscsi_tgt/iscsi_tgt.o 00:02:46.453 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:46.453 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:46.453 TEST_HEADER include/spdk/nvme_zns.h 00:02:46.453 TEST_HEADER include/spdk/nvmf_spec.h 00:02:46.453 TEST_HEADER include/spdk/nvmf_transport.h 00:02:46.453 TEST_HEADER include/spdk/nvmf.h 00:02:46.453 TEST_HEADER include/spdk/opal.h 00:02:46.453 TEST_HEADER include/spdk/opal_spec.h 00:02:46.453 TEST_HEADER include/spdk/queue.h 00:02:46.453 TEST_HEADER include/spdk/pci_ids.h 00:02:46.453 TEST_HEADER include/spdk/pipe.h 00:02:46.453 TEST_HEADER include/spdk/reduce.h 00:02:46.453 TEST_HEADER include/spdk/rpc.h 00:02:46.453 TEST_HEADER include/spdk/scsi.h 00:02:46.453 TEST_HEADER 
include/spdk/scheduler.h 00:02:46.453 TEST_HEADER include/spdk/scsi_spec.h 00:02:46.453 TEST_HEADER include/spdk/sock.h 00:02:46.453 TEST_HEADER include/spdk/stdinc.h 00:02:46.453 TEST_HEADER include/spdk/string.h 00:02:46.453 TEST_HEADER include/spdk/trace.h 00:02:46.453 TEST_HEADER include/spdk/trace_parser.h 00:02:46.453 TEST_HEADER include/spdk/thread.h 00:02:46.453 TEST_HEADER include/spdk/ublk.h 00:02:46.453 TEST_HEADER include/spdk/tree.h 00:02:46.453 TEST_HEADER include/spdk/util.h 00:02:46.453 TEST_HEADER include/spdk/version.h 00:02:46.453 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:46.453 TEST_HEADER include/spdk/uuid.h 00:02:46.453 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:46.453 TEST_HEADER include/spdk/vhost.h 00:02:46.453 TEST_HEADER include/spdk/vmd.h 00:02:46.454 TEST_HEADER include/spdk/zipf.h 00:02:46.454 TEST_HEADER include/spdk/xor.h 00:02:46.454 CXX test/cpp_headers/accel.o 00:02:46.454 CXX test/cpp_headers/accel_module.o 00:02:46.454 CXX test/cpp_headers/assert.o 00:02:46.454 CC app/spdk_tgt/spdk_tgt.o 00:02:46.454 CXX test/cpp_headers/barrier.o 00:02:46.454 CXX test/cpp_headers/bdev_module.o 00:02:46.454 CXX test/cpp_headers/base64.o 00:02:46.454 CXX test/cpp_headers/bdev.o 00:02:46.454 CXX test/cpp_headers/bdev_zone.o 00:02:46.454 CXX test/cpp_headers/bit_pool.o 00:02:46.454 CXX test/cpp_headers/bit_array.o 00:02:46.454 CXX test/cpp_headers/blobfs.o 00:02:46.454 CXX test/cpp_headers/blob_bdev.o 00:02:46.454 CXX test/cpp_headers/blob.o 00:02:46.454 CXX test/cpp_headers/blobfs_bdev.o 00:02:46.454 CXX test/cpp_headers/config.o 00:02:46.454 CXX test/cpp_headers/cpuset.o 00:02:46.454 CXX test/cpp_headers/conf.o 00:02:46.454 CXX test/cpp_headers/crc64.o 00:02:46.454 CXX test/cpp_headers/crc32.o 00:02:46.454 CXX test/cpp_headers/dif.o 00:02:46.454 CXX test/cpp_headers/crc16.o 00:02:46.454 CXX test/cpp_headers/dma.o 00:02:46.454 CXX test/cpp_headers/endian.o 00:02:46.454 CXX test/cpp_headers/env_dpdk.o 00:02:46.454 CXX 
test/cpp_headers/env.o 00:02:46.454 CXX test/cpp_headers/fd_group.o 00:02:46.454 CXX test/cpp_headers/fd.o 00:02:46.454 CXX test/cpp_headers/event.o 00:02:46.454 CXX test/cpp_headers/ftl.o 00:02:46.454 CXX test/cpp_headers/file.o 00:02:46.454 CXX test/cpp_headers/gpt_spec.o 00:02:46.454 CXX test/cpp_headers/hexlify.o 00:02:46.454 CXX test/cpp_headers/histogram_data.o 00:02:46.454 CXX test/cpp_headers/idxd.o 00:02:46.454 CXX test/cpp_headers/idxd_spec.o 00:02:46.454 CXX test/cpp_headers/init.o 00:02:46.454 CXX test/cpp_headers/ioat.o 00:02:46.454 CXX test/cpp_headers/iscsi_spec.o 00:02:46.454 CXX test/cpp_headers/jsonrpc.o 00:02:46.454 CXX test/cpp_headers/ioat_spec.o 00:02:46.454 CXX test/cpp_headers/json.o 00:02:46.454 CXX test/cpp_headers/keyring.o 00:02:46.454 CXX test/cpp_headers/keyring_module.o 00:02:46.454 CXX test/cpp_headers/likely.o 00:02:46.454 CXX test/cpp_headers/lvol.o 00:02:46.454 CXX test/cpp_headers/log.o 00:02:46.454 CXX test/cpp_headers/memory.o 00:02:46.454 CXX test/cpp_headers/nbd.o 00:02:46.454 CXX test/cpp_headers/mmio.o 00:02:46.454 CXX test/cpp_headers/net.o 00:02:46.454 CXX test/cpp_headers/nvme_intel.o 00:02:46.454 CXX test/cpp_headers/notify.o 00:02:46.454 CXX test/cpp_headers/nvme.o 00:02:46.724 CXX test/cpp_headers/nvme_ocssd.o 00:02:46.724 CXX test/cpp_headers/nvme_spec.o 00:02:46.724 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:46.724 CXX test/cpp_headers/nvme_zns.o 00:02:46.724 CXX test/cpp_headers/nvmf_cmd.o 00:02:46.724 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:46.724 CXX test/cpp_headers/nvmf.o 00:02:46.724 CXX test/cpp_headers/nvmf_spec.o 00:02:46.724 CXX test/cpp_headers/nvmf_transport.o 00:02:46.724 CXX test/cpp_headers/opal.o 00:02:46.724 CXX test/cpp_headers/opal_spec.o 00:02:46.724 CXX test/cpp_headers/pci_ids.o 00:02:46.724 CXX test/cpp_headers/pipe.o 00:02:46.724 CXX test/cpp_headers/queue.o 00:02:46.724 CC test/thread/poller_perf/poller_perf.o 00:02:46.724 CC test/env/memory/memory_ut.o 00:02:46.724 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.724 CC examples/util/zipf/zipf.o 00:02:46.724 CXX test/cpp_headers/reduce.o 00:02:46.724 CC test/env/vtophys/vtophys.o 00:02:46.724 CC test/app/jsoncat/jsoncat.o 00:02:46.724 CC examples/ioat/perf/perf.o 00:02:46.724 CC test/app/stub/stub.o 00:02:46.724 CC test/env/pci/pci_ut.o 00:02:46.724 CC test/dma/test_dma/test_dma.o 00:02:46.724 CC examples/ioat/verify/verify.o 00:02:46.724 CC app/fio/nvme/fio_plugin.o 00:02:46.724 CXX test/cpp_headers/rpc.o 00:02:46.724 CC test/app/histogram_perf/histogram_perf.o 00:02:46.724 CC test/app/bdev_svc/bdev_svc.o 00:02:46.724 CC app/fio/bdev/fio_plugin.o 00:02:46.724 LINK spdk_lspci 00:02:46.987 LINK rpc_client_test 00:02:46.987 LINK interrupt_tgt 00:02:46.987 CC test/env/mem_callbacks/mem_callbacks.o 00:02:46.987 LINK poller_perf 00:02:47.244 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.244 LINK spdk_nvme_discover 00:02:47.244 LINK nvmf_tgt 00:02:47.244 LINK env_dpdk_post_init 00:02:47.244 LINK zipf 00:02:47.244 CXX test/cpp_headers/scheduler.o 00:02:47.244 CXX test/cpp_headers/scsi.o 00:02:47.244 CXX test/cpp_headers/scsi_spec.o 00:02:47.244 CXX test/cpp_headers/stdinc.o 00:02:47.244 CXX test/cpp_headers/sock.o 00:02:47.244 CXX test/cpp_headers/string.o 00:02:47.244 CXX test/cpp_headers/thread.o 00:02:47.244 CXX test/cpp_headers/trace.o 00:02:47.244 LINK spdk_trace_record 00:02:47.244 CXX test/cpp_headers/tree.o 00:02:47.244 CXX test/cpp_headers/trace_parser.o 00:02:47.244 CXX test/cpp_headers/ublk.o 00:02:47.244 CXX test/cpp_headers/util.o 00:02:47.244 CXX test/cpp_headers/uuid.o 00:02:47.244 CXX test/cpp_headers/version.o 00:02:47.244 CXX test/cpp_headers/vfio_user_pci.o 00:02:47.244 CXX test/cpp_headers/vfio_user_spec.o 00:02:47.244 CXX test/cpp_headers/vhost.o 00:02:47.244 CXX test/cpp_headers/vmd.o 00:02:47.244 CXX test/cpp_headers/xor.o 00:02:47.244 CXX test/cpp_headers/zipf.o 00:02:47.244 LINK jsoncat 00:02:47.244 LINK iscsi_tgt 00:02:47.244 LINK bdev_svc 
00:02:47.244 LINK ioat_perf 00:02:47.244 LINK vtophys 00:02:47.244 LINK spdk_dd 00:02:47.244 LINK histogram_perf 00:02:47.244 LINK spdk_tgt 00:02:47.244 LINK stub 00:02:47.244 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.244 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.244 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.244 LINK spdk_trace 00:02:47.244 LINK verify 00:02:47.502 LINK pci_ut 00:02:47.502 LINK test_dma 00:02:47.761 CC test/event/reactor_perf/reactor_perf.o 00:02:47.761 CC examples/vmd/lsvmd/lsvmd.o 00:02:47.761 CC test/event/event_perf/event_perf.o 00:02:47.761 CC test/event/reactor/reactor.o 00:02:47.761 CC examples/idxd/perf/perf.o 00:02:47.761 CC test/event/app_repeat/app_repeat.o 00:02:47.761 CC examples/sock/hello_world/hello_sock.o 00:02:47.761 LINK nvme_fuzz 00:02:47.761 CC examples/vmd/led/led.o 00:02:47.761 CC test/event/scheduler/scheduler.o 00:02:47.761 LINK spdk_nvme_perf 00:02:47.761 CC examples/thread/thread/thread_ex.o 00:02:47.761 LINK spdk_bdev 00:02:47.761 LINK spdk_top 00:02:47.761 LINK vhost_fuzz 00:02:47.761 CC app/vhost/vhost.o 00:02:47.761 LINK lsvmd 00:02:47.761 LINK spdk_nvme 00:02:47.761 LINK reactor_perf 00:02:47.761 LINK event_perf 00:02:47.761 LINK reactor 00:02:47.761 LINK led 00:02:47.761 LINK mem_callbacks 00:02:47.761 LINK app_repeat 00:02:48.020 LINK hello_sock 00:02:48.020 LINK spdk_nvme_identify 00:02:48.020 LINK scheduler 00:02:48.020 LINK thread 00:02:48.020 CC test/nvme/startup/startup.o 00:02:48.020 CC test/nvme/reserve/reserve.o 00:02:48.020 CC test/nvme/err_injection/err_injection.o 00:02:48.020 CC test/nvme/reset/reset.o 00:02:48.020 CC test/nvme/fdp/fdp.o 00:02:48.020 CC test/nvme/sgl/sgl.o 00:02:48.020 CC test/nvme/aer/aer.o 00:02:48.020 CC test/nvme/e2edp/nvme_dp.o 00:02:48.020 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.020 LINK vhost 00:02:48.020 CC test/nvme/cuse/cuse.o 00:02:48.020 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.020 CC test/nvme/overhead/overhead.o 
00:02:48.020 CC test/nvme/connect_stress/connect_stress.o 00:02:48.020 CC test/nvme/compliance/nvme_compliance.o 00:02:48.020 CC test/nvme/boot_partition/boot_partition.o 00:02:48.020 CC test/nvme/simple_copy/simple_copy.o 00:02:48.020 CC test/accel/dif/dif.o 00:02:48.020 LINK idxd_perf 00:02:48.020 CC test/blobfs/mkfs/mkfs.o 00:02:48.020 CC test/lvol/esnap/esnap.o 00:02:48.020 LINK memory_ut 00:02:48.278 LINK startup 00:02:48.278 LINK doorbell_aers 00:02:48.278 LINK boot_partition 00:02:48.278 LINK reserve 00:02:48.278 LINK err_injection 00:02:48.278 LINK connect_stress 00:02:48.278 LINK fused_ordering 00:02:48.278 LINK mkfs 00:02:48.278 LINK reset 00:02:48.278 LINK simple_copy 00:02:48.278 LINK sgl 00:02:48.278 LINK nvme_dp 00:02:48.278 LINK overhead 00:02:48.278 LINK aer 00:02:48.278 LINK nvme_compliance 00:02:48.278 LINK fdp 00:02:48.278 CC examples/nvme/reconnect/reconnect.o 00:02:48.278 CC examples/nvme/arbitration/arbitration.o 00:02:48.278 CC examples/nvme/hotplug/hotplug.o 00:02:48.278 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:48.278 CC examples/nvme/abort/abort.o 00:02:48.278 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:48.278 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.278 CC examples/nvme/hello_world/hello_world.o 00:02:48.536 LINK dif 00:02:48.536 CC examples/accel/perf/accel_perf.o 00:02:48.536 CC examples/blob/cli/blobcli.o 00:02:48.536 CC examples/blob/hello_world/hello_blob.o 00:02:48.537 LINK cmb_copy 00:02:48.537 LINK pmr_persistence 00:02:48.537 LINK hello_world 00:02:48.537 LINK hotplug 00:02:48.537 LINK reconnect 00:02:48.537 LINK arbitration 00:02:48.537 LINK abort 00:02:48.795 LINK hello_blob 00:02:48.795 LINK iscsi_fuzz 00:02:48.795 LINK nvme_manage 00:02:48.795 LINK accel_perf 00:02:48.795 LINK blobcli 00:02:48.795 CC test/bdev/bdevio/bdevio.o 00:02:49.054 LINK cuse 00:02:49.313 LINK bdevio 00:02:49.313 CC examples/bdev/hello_world/hello_bdev.o 00:02:49.313 CC examples/bdev/bdevperf/bdevperf.o 00:02:49.571 LINK 
hello_bdev 00:02:49.830 LINK bdevperf 00:02:50.398 CC examples/nvmf/nvmf/nvmf.o 00:02:50.656 LINK nvmf 00:02:51.590 LINK esnap 00:02:51.849 00:02:51.849 real 0m44.776s 00:02:51.849 user 6m46.177s 00:02:51.849 sys 3m27.751s 00:02:51.849 11:10:47 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:51.849 11:10:47 make -- common/autotest_common.sh@10 -- $ set +x 00:02:51.849 ************************************ 00:02:51.849 END TEST make 00:02:51.849 ************************************ 00:02:51.849 11:10:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:51.849 11:10:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:51.849 11:10:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:51.849 11:10:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:51.849 11:10:47 -- pm/common@44 -- $ pid=1225624 00:02:51.849 11:10:47 -- pm/common@50 -- $ kill -TERM 1225624 00:02:51.849 11:10:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:51.849 11:10:47 -- pm/common@44 -- $ pid=1225626 00:02:51.849 11:10:47 -- pm/common@50 -- $ kill -TERM 1225626 00:02:51.849 11:10:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:51.849 11:10:47 -- pm/common@44 -- $ pid=1225627 00:02:51.849 11:10:47 -- pm/common@50 -- $ kill -TERM 1225627 00:02:51.849 11:10:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:51.849 11:10:47 -- 
pm/common@44 -- $ pid=1225650 00:02:51.849 11:10:47 -- pm/common@50 -- $ sudo -E kill -TERM 1225650 00:02:51.849 11:10:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:51.849 11:10:47 -- nvmf/common.sh@7 -- # uname -s 00:02:51.849 11:10:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:51.849 11:10:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:51.849 11:10:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:51.849 11:10:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:51.849 11:10:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:51.849 11:10:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:51.849 11:10:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:51.849 11:10:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:51.849 11:10:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:51.849 11:10:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:51.849 11:10:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:51.849 11:10:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:51.849 11:10:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:51.849 11:10:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:51.849 11:10:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:51.849 11:10:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:51.849 11:10:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:51.849 11:10:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:51.849 11:10:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:51.849 11:10:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:51.849 11:10:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.849 11:10:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.849 11:10:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.849 11:10:47 -- paths/export.sh@5 -- # export PATH 00:02:51.849 11:10:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.849 11:10:47 -- nvmf/common.sh@47 -- # : 0 00:02:51.849 11:10:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:51.849 11:10:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:51.849 11:10:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:51.849 11:10:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:51.849 11:10:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:51.849 11:10:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:51.849 11:10:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:51.849 11:10:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:51.849 11:10:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:51.849 11:10:47 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:51.849 11:10:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:51.849 11:10:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:51.849 11:10:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:51.849 11:10:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:51.849 11:10:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:51.849 11:10:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:51.849 11:10:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:51.849 11:10:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:51.849 11:10:47 -- spdk/autotest.sh@48 -- # udevadm_pid=1284796 00:02:51.849 11:10:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:51.849 11:10:47 -- pm/common@17 -- # local monitor 00:02:51.849 11:10:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:51.849 11:10:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@21 -- # date +%s 00:02:51.849 11:10:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.849 11:10:47 -- pm/common@21 -- # date +%s 00:02:51.849 11:10:47 -- pm/common@25 -- # sleep 1 00:02:51.849 11:10:47 -- pm/common@21 -- # date +%s 00:02:51.849 11:10:47 -- pm/common@21 -- # date +%s 00:02:51.849 11:10:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985047 00:02:51.849 11:10:47 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985047 00:02:51.849 11:10:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985047 00:02:51.849 11:10:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985047 00:02:52.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985047_collect-vmstat.pm.log 00:02:52.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985047_collect-cpu-load.pm.log 00:02:52.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985047_collect-cpu-temp.pm.log 00:02:52.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985047_collect-bmc-pm.bmc.pm.log 00:02:53.042 11:10:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.042 11:10:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.042 11:10:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:53.042 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:02:53.042 11:10:48 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.042 11:10:48 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:53.042 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:02:53.042 11:10:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:53.042 11:10:48 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.042 11:10:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.042 11:10:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:53.042 11:10:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.042 11:10:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.042 11:10:48 -- common/autotest_common.sh@1455 -- # uname 00:02:53.042 11:10:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:53.042 11:10:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.042 11:10:48 -- common/autotest_common.sh@1475 -- # uname 00:02:53.042 11:10:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:53.042 11:10:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:53.042 11:10:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:53.042 11:10:48 -- spdk/autotest.sh@72 -- # hash lcov 00:02:53.042 11:10:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:53.042 11:10:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:53.042 --rc lcov_branch_coverage=1 00:02:53.042 --rc lcov_function_coverage=1 00:02:53.042 --rc genhtml_branch_coverage=1 00:02:53.042 --rc genhtml_function_coverage=1 00:02:53.042 --rc genhtml_legend=1 00:02:53.042 --rc geninfo_all_blocks=1 00:02:53.042 ' 00:02:53.042 11:10:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:53.042 --rc lcov_branch_coverage=1 00:02:53.042 --rc lcov_function_coverage=1 00:02:53.042 --rc genhtml_branch_coverage=1 00:02:53.042 --rc genhtml_function_coverage=1 00:02:53.042 --rc genhtml_legend=1 00:02:53.042 --rc geninfo_all_blocks=1 00:02:53.042 ' 00:02:53.042 11:10:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:53.042 --rc lcov_branch_coverage=1 00:02:53.042 --rc lcov_function_coverage=1 00:02:53.042 --rc genhtml_branch_coverage=1 00:02:53.042 --rc 
genhtml_function_coverage=1 00:02:53.042 --rc genhtml_legend=1 00:02:53.042 --rc geninfo_all_blocks=1 00:02:53.042 --no-external' 00:02:53.042 11:10:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:53.042 --rc lcov_branch_coverage=1 00:02:53.042 --rc lcov_function_coverage=1 00:02:53.042 --rc genhtml_branch_coverage=1 00:02:53.042 --rc genhtml_function_coverage=1 00:02:53.042 --rc genhtml_legend=1 00:02:53.042 --rc geninfo_all_blocks=1 00:02:53.042 --no-external' 00:02:53.042 11:10:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:53.042 lcov: LCOV version 1.14 00:02:53.042 11:10:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:05.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:05.242 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:13.482 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:13.482 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions 
found
00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:03:13.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:03:13.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:03:13.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:03:13.483 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:03:13.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:03:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:03:13.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:03:13.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:03:14.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:03:14.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:03:17.291 11:11:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:17.291 11:11:12 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:17.291 11:11:12 -- common/autotest_common.sh@10 -- # set +x
00:03:17.291 11:11:12 -- spdk/autotest.sh@91 -- # rm -f
00:03:17.291 11:11:12 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:20.599 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:20.599 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:20.599 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:20.599 11:11:15 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:20.599 11:11:15 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:20.599 11:11:15 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:20.599 11:11:15 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:20.599 11:11:15 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:20.599 11:11:15 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:20.599 11:11:15 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:20.599 11:11:15 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:20.599 11:11:15 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:20.599 11:11:15 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:20.599 11:11:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:20.599 11:11:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:20.599 11:11:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:20.599 11:11:15 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:20.599 11:11:15 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:20.599 No valid GPT data, bailing
00:03:20.599 11:11:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:20.599 11:11:15 -- scripts/common.sh@391 -- # pt=
00:03:20.599 11:11:15 -- scripts/common.sh@392 -- # return 1
00:03:20.599 11:11:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:20.599 1+0 records in
00:03:20.599 1+0 records out
00:03:20.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410848 s, 255 MB/s
00:03:20.599 11:11:15 -- spdk/autotest.sh@118 -- # sync
00:03:20.599 11:11:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:20.599 11:11:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:20.599 11:11:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:25.872 11:11:21 -- spdk/autotest.sh@124 -- # uname -s
00:03:25.872 11:11:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:25.872 11:11:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:25.872 11:11:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:25.872 11:11:21 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:25.872 11:11:21 -- common/autotest_common.sh@10 -- # set +x
00:03:25.872 ************************************
00:03:25.872 START TEST setup.sh
00:03:25.872 ************************************
00:03:25.872 11:11:21 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:25.872 * Looking for test storage...
00:03:25.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:25.872 11:11:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:25.872 11:11:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:25.872 11:11:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:25.872 11:11:21 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:25.872 11:11:21 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:25.872 11:11:21 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:25.872 ************************************
00:03:25.872 START TEST acl
00:03:25.872 ************************************
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:25.872 * Looking for test storage...
00:03:25.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:25.872 11:11:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:25.872 11:11:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:25.872 11:11:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:25.872 11:11:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:25.872 11:11:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:25.872 11:11:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:25.872 11:11:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:25.872 11:11:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:25.872 11:11:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:29.162 11:11:24 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:29.162 11:11:24 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:29.162 11:11:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:29.162 11:11:24 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:29.162 11:11:24 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.162 11:11:24 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:31.765 Hugepages
00:03:31.765 node hugesize free / total
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765
00:03:31.765 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:31.765 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:32.025 11:11:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:32.025 11:11:27 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:32.025 11:11:27 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:32.025 11:11:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:32.025 ************************************
00:03:32.025 START TEST denied
00:03:32.025 ************************************
00:03:32.025 11:11:27 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:03:32.025 11:11:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:03:32.025 11:11:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:32.025 11:11:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:03:32.025 11:11:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.025 11:11:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:35.311 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:35.311 11:11:30 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:39.501
00:03:39.501 real 0m7.134s
00:03:39.501 user 0m2.298s
00:03:39.501 sys 0m4.098s
00:03:39.501 11:11:34 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:39.501 11:11:34 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:39.501 ************************************
00:03:39.501 END TEST denied
00:03:39.501 ************************************
00:03:39.501 11:11:34 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:39.501 11:11:34 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:39.501 11:11:34 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:39.501 11:11:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:39.501 ************************************
00:03:39.501 START TEST allowed
00:03:39.501 ************************************
00:03:39.501 11:11:34 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:03:39.501 11:11:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:03:39.501 11:11:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:39.501 11:11:34 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:03:39.501 11:11:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.501 11:11:34 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:43.688 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:43.688 11:11:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:43.688 11:11:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:43.688 11:11:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:43.688 11:11:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:43.688 11:11:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:46.975
00:03:46.975 real 0m7.540s
00:03:46.975 user 0m2.253s
00:03:46.975 sys 0m3.954s
00:03:46.975 11:11:42 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:46.975 11:11:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:46.975 ************************************
00:03:46.975 END TEST allowed
00:03:46.975 ************************************
00:03:46.975
00:03:46.975 real 0m20.955s
00:03:46.975 user 0m6.949s
00:03:46.975 sys 0m12.151s
00:03:46.975 11:11:42 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:46.975 11:11:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:46.975 ************************************
00:03:46.975 END TEST acl
00:03:46.975 ************************************
00:03:46.975 11:11:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:46.975 11:11:42 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:46.975 11:11:42 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:46.975 11:11:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:46.975 ************************************
00:03:46.975 START TEST hugepages
00:03:46.975 ************************************
00:03:46.975 11:11:42 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:46.975 * Looking for test storage...
00:03:46.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 174155784 kB' 'MemAvailable: 177011180 kB' 'Buffers: 4132 kB' 'Cached: 9369976 kB' 'SwapCached: 0 kB' 'Active: 6383556 kB' 'Inactive: 3506552 kB' 'Active(anon): 5995740 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519316 kB' 'Mapped: 196860 kB' 'Shmem: 5479740 kB' 'KReclaimable: 210956 kB' 'Slab: 712296 kB' 'SReclaimable: 210956 kB' 'SUnreclaim: 501340 kB' 'KernelStack: 20400 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 7510904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314728 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB'
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.975 11:11:42 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 
11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 
11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.976 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
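The long run of `[[ $var == Hugepagesize ]] … continue` traces above is `setup/common.sh`'s `get_meminfo` scanning `/proc/meminfo` one field at a time with `IFS=': ' read -r var val _`. A minimal standalone sketch of that parsing idiom (the helper name `get_meminfo_field` is ours, not from the script):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning idiom traced above (assumption: simplified;
# the real get_meminfo also handles per-node meminfo files).
# Reads meminfo-format lines from stdin and prints the numeric value
# (without the "kB" unit) of the requested field.
get_meminfo_field() {
    local get=$1 var val _
    # IFS=': ' splits on both the colon and the spaces, so
    # "Hugepagesize:       2048 kB" yields var=Hugepagesize val=2048 _=kB.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            printf '%s\n' "$val"
            return 0
        fi
    done
    return 1
}

# Usage example against the live file:
#   get_meminfo_field Hugepagesize < /proc/meminfo
```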
global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.977 
11:11:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:46.977 11:11:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:46.977 11:11:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.977 11:11:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.977 11:11:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.977 ************************************ 00:03:46.977 START TEST default_setup 00:03:46.977 ************************************ 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.977 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.977 11:11:42 
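The `clear_hp` trace above (`echo 0` repeated for every node/page-size pair) resets all preallocated hugepages before the test. A hedged re-creation of that loop; `NODE_SYSFS` is a parameter we introduce so the sketch can be exercised against a scratch tree instead of the live `/sys`:

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp loop traced above (assumption: illustrative only).
# Writes 0 to the nr_hugepages file of every page size on every NUMA node.
NODE_SYSFS=${NODE_SYSFS:-/sys/devices/system/node}

clear_hp() {
    local node hp
    for node in "$NODE_SYSFS"/node[0-9]*; do
        # e.g. .../node0/hugepages/hugepages-2048kB/nr_hugepages
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}
```

Writing to the real sysfs files requires root; the trace shows the script pairing this with `CLEAR_HUGE=yes` so later setup starts from a clean allocation.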
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.978 11:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.265 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:00:04.1 (8086 2021): ioatdma -> 
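The `get_test_nr_hugepages 2097152 0` trace above converts a byte-ish size into a page count (2097152 / 2048 = 1024) and assigns it to the user-requested node. A simplified sketch of that arithmetic, assuming the 2048 kB page size reported earlier in the log:

```shell
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages as traced above (assumption: simplified;
# the real function also splits pages evenly when no nodes are named).
default_hugepages=2048   # kB, from the Hugepagesize field parsed earlier

get_test_nr_hugepages() {
    local size=$1; shift
    local user_nodes=("$@")            # e.g. ('0'), as in node_ids=('0')
    local nr_hugepages=$(( size / default_hugepages ))
    nodes_test=()                      # global result array, per the trace
    local node
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
}

get_test_nr_hugepages 2097152 0
echo "${nodes_test[0]}"   # → 1024, matching nr_hugepages=1024 in the trace
```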
vfio-pci 00:03:50.265 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:50.265 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:51.201 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176377088 kB' 'MemAvailable: 179232400 kB' 'Buffers: 4132 kB' 'Cached: 9370084 kB' 'SwapCached: 0 kB' 'Active: 6400100 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012284 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535748 kB' 'Mapped: 196844 kB' 'Shmem: 5479848 kB' 'KReclaimable: 210788 kB' 'Slab: 710812 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500024 kB' 'KernelStack: 20592 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7527264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 
12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.465 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.465 [... identical 'common.sh@31 IFS=': ' / common.sh@31 read -r var val _ / common.sh@32 [[ <field> == AnonHugePages ]] / common.sh@32 continue' trace repeated for each remaining /proc/meminfo field, MemFree through Percpu ...] 00:03:51.466 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.467 11:11:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176378356 kB' 'MemAvailable: 179233668 kB' 'Buffers: 4132 kB' 'Cached: 9370088 kB' 'SwapCached: 0 kB' 'Active: 6399524 kB' 'Inactive: 3506552 kB' 'Active(anon): 6011708 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535196 kB' 'Mapped: 196796 kB' 'Shmem: 5479852 kB' 'KReclaimable: 210788 kB' 'Slab: 710876 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500088 kB' 'KernelStack: 20496 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7528908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.467 11:11:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.467 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.467 [... identical 'common.sh@31 IFS=': ' / common.sh@31 read -r var val _ / common.sh@32 [[ <field> == HugePages_Surp ]] / common.sh@32 continue' trace repeated for each /proc/meminfo field, MemAvailable through HugePages_Total ...] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176378164 kB' 'MemAvailable: 179233476 kB' 'Buffers: 4132 kB' 'Cached: 9370100 kB' 'SwapCached: 0 kB' 'Active: 6400368 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012552 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536072 kB' 'Mapped: 196796 kB' 'Shmem: 5479864 kB' 'KReclaimable: 210788 kB' 'Slab: 710876 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500088 kB' 'KernelStack: 20640 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7527808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.469 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 
11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.470 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 
11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.471 nr_hugepages=1024 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.471 resv_hugepages=0 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.471 surplus_hugepages=0 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.471 anon_hugepages=0 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@18 -- # local node= 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.471 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.472 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.472 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176379844 kB' 'MemAvailable: 179235156 kB' 'Buffers: 4132 kB' 'Cached: 9370136 kB' 'SwapCached: 0 kB' 'Active: 6399952 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012136 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535556 kB' 'Mapped: 196796 kB' 'Shmem: 5479900 kB' 'KReclaimable: 210788 kB' 'Slab: 710876 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500088 kB' 'KernelStack: 20448 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7527832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB'
00:03:51.472 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.472 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:51.472 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:51.472 11:11:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue trace repeated for each remaining /proc/meminfo field (MemFree through HugePages_Free) until HugePages_Total matches ...]
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:51.473 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85530684 kB' 'MemUsed: 12132000 kB' 'SwapCached: 0 kB' 'Active: 5273104 kB' 'Inactive: 3292688 kB' 'Active(anon): 5064260 kB' 'Inactive(anon): 0 kB' 'Active(file): 208844 kB' 'Inactive(file): 3292688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8104784 kB' 'Mapped: 143256 kB' 'AnonPages: 464220 kB' 'Shmem: 4603252 kB' 'KernelStack: 12120 kB' 'PageTables: 6124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124320 kB' 'Slab: 360484 kB' 'SReclaimable: 124320 kB' 'SUnreclaim: 236164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:51.474 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue trace repeated for each remaining node0 meminfo field (MemFree through HugePages_Free) until HugePages_Surp matches ...]
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:51.475 node0=1024 expecting 1024 00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:51.475 00:03:51.475 real 0m4.497s 00:03:51.475 user 0m1.338s 00:03:51.475 sys 0m1.919s 00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.475 11:11:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:51.475 ************************************ 00:03:51.475 END TEST default_setup 00:03:51.475 ************************************ 00:03:51.475 11:11:47 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:51.475 11:11:47 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.475 11:11:47 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.475 11:11:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.475 ************************************ 00:03:51.475 START TEST per_node_1G_alloc 00:03:51.475 ************************************ 00:03:51.475 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:51.475 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:51.475 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:51.475 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:51.475 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:51.475 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.734 11:11:47 
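The long runs of `# continue` in the trace above come from `get_meminfo` in setup/common.sh scanning /proc/meminfo field by field. A minimal sketch of that read pattern (an assumption-based simplification of the traced loop, not the exact SPDK code; `meminfo_get` is an illustrative name):

```shell
# Sketch (assumption: simplified from the get_meminfo loop traced above).
# Each line is split on ': ' into key/value; every non-matching key hits
# 'continue', which is why the xtrace shows long runs of '# continue'
# before the wanted field (e.g. HugePages_Surp) is reached.
meminfo_get() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Fed a /proc/meminfo-style snippet instead of the live file, so the
# example is deterministic:
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
    'HugePages_Surp: 0' | meminfo_get HugePages_Surp
# prints: 0
```

With `IFS=': '` the trailing unit (`kB`) lands in the throwaway `_` variable, so `meminfo_get MemTotal` would print just the number.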
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:51.734 11:11:47 
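The hugepages.sh@62-71 trace above shows `get_test_nr_hugepages_per_node` assigning the per-node count to each node named in `HUGENODE`. A standalone sketch of that assignment (an assumption-based condensation of the traced loop, not the full SPDK helper):

```shell
# Sketch (assumption: condensed from the get_test_nr_hugepages_per_node
# trace above). Each node listed in HUGENODE receives the full per-node
# count, so HUGENODE=0,1 with a per-node count of 512 yields 512 pages
# reserved on node0 and 512 on node1.
nodes_test=()
_nr_hugepages=512
user_nodes=(0 1)    # from HUGENODE=0,1 split on IFS=,
for _no_nodes in "${user_nodes[@]}"; do
    nodes_test[$_no_nodes]=$_nr_hugepages
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
# prints: node0=512 node1=512
```

This matches the later verification step, where the harness compares the observed per-node totals against the expected `node0=1024`-style strings.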
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.734 11:11:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.267 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:54.268 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.268 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local 
sorted_s 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176373148 kB' 'MemAvailable: 179228460 kB' 'Buffers: 4132 kB' 'Cached: 9370228 kB' 'SwapCached: 0 kB' 'Active: 6403620 kB' 'Inactive: 
3506552 kB' 'Active(anon): 6015804 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538424 kB' 'Mapped: 196324 kB' 'Shmem: 5479992 kB' 'KReclaimable: 210788 kB' 'Slab: 711628 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500840 kB' 'KernelStack: 20288 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7521392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.532 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.532 11:11:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the read loop skips each /proc/meminfo field from MemAvailable through Percpu -- Buffers, Cached, SwapCached, Active/Inactive (anon/file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu -- with 'continue' while scanning for AnonHugePages) 00:03:54.533 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.534 11:11:49
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.534 11:11:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176374248 kB' 'MemAvailable: 179229560 kB' 'Buffers: 4132 kB' 'Cached: 9370232 kB' 'SwapCached: 0 kB' 'Active: 6403964 kB' 'Inactive: 3506552 kB' 'Active(anon): 6016148 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538904 kB' 'Mapped: 196664 kB' 'Shmem: 5479996 kB' 'KReclaimable: 210788 kB' 'Slab: 711504 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500716 kB' 'KernelStack: 20336 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7521412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.534 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.535 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176383592 kB' 'MemAvailable: 179238904 kB' 'Buffers: 4132 kB' 'Cached: 9370248 kB' 'SwapCached: 0 kB' 'Active: 6399448 kB' 'Inactive: 3506552 kB' 'Active(anon): 6011632 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534608 kB' 'Mapped: 196160 kB' 'Shmem: 5480012 kB' 'KReclaimable: 210788 kB' 'Slab: 711504 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500716 kB' 'KernelStack: 20336 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7515312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314808 kB' 'VmallocChunk: 
0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 
11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.536 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.536 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.537 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.538 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.538 nr_hugepages=1024 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.538 resv_hugepages=0 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.538 surplus_hugepages=0 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.538 anon_hugepages=0 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176383920 kB' 'MemAvailable: 179239232 kB' 'Buffers: 4132 kB' 'Cached: 9370248 kB' 'SwapCached: 0 kB' 'Active: 6399344 kB' 'Inactive: 3506552 kB' 'Active(anon): 6011528 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534524 kB' 'Mapped: 195772 kB' 'Shmem: 5480012 kB' 'KReclaimable: 210788 kB' 'Slab: 711504 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500716 kB' 'KernelStack: 20336 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7515336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 
11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.538 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.539 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.540 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.540 
11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86582832 kB' 'MemUsed: 11079852 kB' 'SwapCached: 0 kB' 'Active: 5271748 kB' 'Inactive: 3292688 kB' 'Active(anon): 5062904 kB' 'Inactive(anon): 0 kB' 'Active(file): 208844 kB' 'Inactive(file): 3292688 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8104852 kB' 'Mapped: 142448 kB' 'AnonPages: 462964 kB' 'Shmem: 4603320 kB' 'KernelStack: 12072 kB' 'PageTables: 5816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124320 kB' 'Slab: 360536 kB' 'SReclaimable: 124320 kB' 'SUnreclaim: 236216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 
11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.540 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.541 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.541 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 89801944 kB' 'MemUsed: 3916532 kB' 'SwapCached: 0 kB' 'Active: 1127596 kB' 'Inactive: 213864 kB' 'Active(anon): 948624 kB' 'Inactive(anon): 0 kB' 'Active(file): 178972 kB' 'Inactive(file): 213864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1269528 kB' 'Mapped: 53324 kB' 'AnonPages: 71560 kB' 'Shmem: 876692 kB' 'KernelStack: 8264 kB' 'PageTables: 2656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86468 kB' 'Slab: 350968 kB' 'SReclaimable: 86468 kB' 'SUnreclaim: 264500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.542 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.542
[identical IFS/read/compare/continue xtrace iterations for the remaining /proc/meminfo fields elided]
11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.543 node0=512 expecting 512 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:54.543 node1=512 expecting 512 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.543
00:03:54.543 real 0m3.006s
00:03:54.543 user 0m1.230s
00:03:54.543 sys 0m1.846s
11:11:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.543 11:11:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.543
************************************
END TEST per_node_1G_alloc
************************************
00:03:54.543 11:11:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:54.543 11:11:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.543 11:11:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.543 11:11:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.802
************************************
START TEST even_2G_alloc
************************************
00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
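The xtrace records above come from setup/common.sh's `get_meminfo` helper walking /proc/meminfo one record at a time: set `IFS=': '`, `read -r var val _`, compare the field name, and either `continue` or `echo` the value. A minimal standalone sketch of that parsing pattern (the function name and the fallback behavior here are illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the /proc/meminfo parsing loop traced above: each record
# looks like "HugePages_Free:     1024" or "MemTotal:  191381160 kB".
# Splitting on IFS=': ' yields the field name in $var, the numeric
# value in $val, and the unit (if any) in $_.
# get_meminfo_field is an illustrative name, not the real
# setup/common.sh function.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # The trailing colon of the field name is consumed by IFS.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    # Field not present in this kernel's /proc/meminfo.
    return 1
}

get_meminfo_field MemTotal
```

On the node captured in the snapshot above, asking for `HugePages_Free` this way would print 1024, matching the `'HugePages_Free: 1024'` record in the logged meminfo dump.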
00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.802 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # 
[[ output == output ]] 00:03:54.803 11:11:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.338 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.338 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.338 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.602 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:57.602 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.602 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.602 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.602 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.602 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # 
local resv 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176388704 kB' 'MemAvailable: 179244016 kB' 'Buffers: 4132 kB' 'Cached: 9370388 kB' 'SwapCached: 0 kB' 'Active: 6400760 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012944 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 
0 kB' 'AnonPages: 535588 kB' 'Mapped: 195696 kB' 'Shmem: 5480152 kB' 'KReclaimable: 210788 kB' 'Slab: 710736 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499948 kB' 'KernelStack: 20352 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7515948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:57.603
11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.603 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.603
[identical IFS/read/compare/continue xtrace iterations for the remaining /proc/meminfo fields elided]
11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.604 11:11:53
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176388948 kB' 'MemAvailable: 179244260 kB' 'Buffers: 4132 kB' 'Cached: 9370392 kB' 'SwapCached: 0 kB' 'Active: 6400372 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012556 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535648 kB' 'Mapped: 195620 kB' 'Shmem: 5480156 kB' 'KReclaimable: 210788 kB' 'Slab: 710712 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499924 kB' 'KernelStack: 20336 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7515968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.604 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.605 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 
11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 
-- # get_meminfo HugePages_Rsvd 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176390028 kB' 'MemAvailable: 179245340 kB' 'Buffers: 4132 kB' 'Cached: 9370408 kB' 'SwapCached: 0 kB' 'Active: 6400016 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012200 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535284 kB' 'Mapped: 195620 kB' 'Shmem: 5480172 kB' 'KReclaimable: 210788 kB' 'Slab: 710712 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499924 kB' 'KernelStack: 20320 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 
'Committed_AS: 7515988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 
11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.606 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.607 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 
11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.608 nr_hugepages=1024 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.608 resv_hugepages=0 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.608 surplus_hugepages=0 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.608 anon_hugepages=0 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176391156 kB' 'MemAvailable: 179246468 kB' 'Buffers: 4132 kB' 'Cached: 9370432 kB' 'SwapCached: 0 kB' 'Active: 6400012 kB' 'Inactive: 3506552 kB' 'Active(anon): 
6012196 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535284 kB' 'Mapped: 195620 kB' 'Shmem: 5480196 kB' 'KReclaimable: 210788 kB' 'Slab: 710712 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499924 kB' 'KernelStack: 20320 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7516012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.608 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.609 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.610 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86583512 kB' 'MemUsed: 11079172 kB' 'SwapCached: 0 kB' 'Active: 5271872 kB' 'Inactive: 3292688 kB' 'Active(anon): 5063028 kB' 'Inactive(anon): 0 kB' 'Active(file): 208844 kB' 'Inactive(file): 3292688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8104964 kB' 'Mapped: 142372 kB' 'AnonPages: 462728 kB' 'Shmem: 4603432 kB' 'KernelStack: 12040 kB' 'PageTables: 6088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124320 kB' 'Slab: 360232 kB' 'SReclaimable: 124320 kB' 'SUnreclaim: 235912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 
11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.610 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.611 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 89808652 kB' 'MemUsed: 3909824 kB' 'SwapCached: 0 kB' 'Active: 1128496 kB' 'Inactive: 213864 kB' 'Active(anon): 949524 kB' 
'Inactive(anon): 0 kB' 'Active(file): 178972 kB' 'Inactive(file): 213864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1269640 kB' 'Mapped: 53248 kB' 'AnonPages: 72888 kB' 'Shmem: 876804 kB' 'KernelStack: 8296 kB' 'PageTables: 2692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86468 kB' 'Slab: 350480 kB' 'SReclaimable: 86468 kB' 'SUnreclaim: 264012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.612 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:57.613 node0=512 expecting 512 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:57.613 node1=512 expecting 512 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:57.613 00:03:57.613 real 0m3.040s 00:03:57.613 user 0m1.254s 00:03:57.613 sys 0m1.854s 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.613 11:11:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.613 ************************************ 00:03:57.613 END TEST even_2G_alloc 00:03:57.613 ************************************ 00:03:57.872 11:11:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:57.872 11:11:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.872 11:11:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.872 11:11:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.872 
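The even_2G_alloc trace above (and the odd_alloc trace that follows) consists almost entirely of setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time with `IFS=': ' read -r var val _`, skipping every key until it matches the requested one (here HugePages_Surp or AnonHugePages) and then echoing its value. A minimal standalone sketch of that scan pattern, for readers following the trace — function and variable names here are illustrative, not the actual setup/common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the xtrace: split each line on
# ': ' into key/value, ignore non-matching keys (the endless
# "continue" lines in the log), and print the matching value.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # The trailing " kB" unit lands in $_ because of the IFS split,
        # so $val is already the bare number.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Feed it a couple of meminfo-style lines, as /proc/meminfo would.
printf '%s\n' 'MemTotal: 191381160 kB' 'HugePages_Surp: 0' |
    get_meminfo_sketch HugePages_Surp   # prints 0
```

This explains why the log is so long: one `[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` pair is traced for every meminfo field before the match, for every call.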
************************************ 00:03:57.872 START TEST odd_alloc 00:03:57.872 ************************************ 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.872 11:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.403 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.403 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.403 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.403 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:04:00.404 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.404 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.668 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176400776 kB' 'MemAvailable: 179256088 kB' 'Buffers: 4132 kB' 'Cached: 9370544 kB' 'SwapCached: 0 kB' 'Active: 6400660 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012844 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535676 kB' 'Mapped: 195824 kB' 'Shmem: 5480308 kB' 'KReclaimable: 210788 kB' 'Slab: 710716 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499928 kB' 'KernelStack: 20368 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7516656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 
11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.670 
11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176402296 kB' 'MemAvailable: 179257608 kB' 'Buffers: 4132 kB' 'Cached: 9370548 kB' 'SwapCached: 0 kB' 'Active: 6400536 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012720 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535492 kB' 'Mapped: 195724 kB' 'Shmem: 5480312 kB' 'KReclaimable: 210788 kB' 'Slab: 710636 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499848 kB' 'KernelStack: 20352 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7516676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 
'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.670 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 
11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 
11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.672 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176402800 kB' 'MemAvailable: 179258112 kB' 'Buffers: 4132 kB' 'Cached: 9370548 kB' 'SwapCached: 0 kB' 'Active: 6400440 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012624 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535492 kB' 'Mapped: 195724 kB' 'Shmem: 5480312 kB' 'KReclaimable: 210788 kB' 'Slab: 710636 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499848 kB' 'KernelStack: 20336 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7516696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 
11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.672 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.674 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:00.674 nr_hugepages=1025 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.674 resv_hugepages=0 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.674 surplus_hugepages=0 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.674 anon_hugepages=0 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176403276 kB' 'MemAvailable: 179258588 kB' 'Buffers: 4132 kB' 'Cached: 9370584 kB' 'SwapCached: 0 kB' 'Active: 6400160 kB' 'Inactive: 3506552 kB' 'Active(anon): 6012344 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535140 kB' 'Mapped: 195724 kB' 'Shmem: 5480348 kB' 'KReclaimable: 210788 kB' 'Slab: 710636 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 499848 kB' 'KernelStack: 20336 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 7516716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.674 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 
11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.676 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86591808 kB' 'MemUsed: 11070876 kB' 'SwapCached: 0 kB' 'Active: 5271824 kB' 'Inactive: 3292688 kB' 'Active(anon): 5062980 kB' 'Inactive(anon): 0 kB' 'Active(file): 208844 kB' 'Inactive(file): 3292688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8104964 kB' 'Mapped: 142476 kB' 'AnonPages: 462644 kB' 'Shmem: 4603432 kB' 'KernelStack: 12088 kB' 'PageTables: 5868 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124320 kB' 'Slab: 360364 kB' 'SReclaimable: 124320 kB' 'SUnreclaim: 236044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.676 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 89811868 kB' 'MemUsed: 3906608 kB' 'SwapCached: 0 kB' 'Active: 1129232 kB' 'Inactive: 213864 kB' 'Active(anon): 950260 kB' 'Inactive(anon): 0 kB' 'Active(file): 178972 kB' 'Inactive(file): 213864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1269792 kB' 'Mapped: 53248 kB' 'AnonPages: 73440 kB' 'Shmem: 876956 kB' 'KernelStack: 8264 kB' 'PageTables: 2648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86468 kB' 'Slab: 350272 kB' 'SReclaimable: 86468 kB' 'SUnreclaim: 263804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.677 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.678 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.938 
11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:00.938 node0=512 expecting 513 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:00.938 node1=513 expecting 512 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:00.938 00:04:00.938 real 0m3.034s 00:04:00.938 user 0m1.215s 00:04:00.938 sys 0m1.878s 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.938 11:11:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.938 ************************************ 00:04:00.938 END TEST odd_alloc 00:04:00.938 ************************************ 00:04:00.938 11:11:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:00.938 11:11:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.938 
11:11:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.938 11:11:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.938 ************************************ 00:04:00.938 START TEST custom_alloc 00:04:00.938 ************************************ 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.938 11:11:56 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:00.938 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.939 11:11:56 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.939 11:11:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.476 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.476 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.476 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 
00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 175355956 kB' 'MemAvailable: 178211268 kB' 'Buffers: 4132 kB' 'Cached: 9370688 kB' 'SwapCached: 0 kB' 'Active: 6401028 kB' 'Inactive: 3506552 kB' 'Active(anon): 6013212 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535940 
kB' 'Mapped: 195804 kB' 'Shmem: 5480452 kB' 'KReclaimable: 210788 kB' 'Slab: 710840 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500052 kB' 'KernelStack: 20320 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7516824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:03.740 [... repeated xtrace lines elided: get_meminfo scans /proc/meminfo keys (MemTotal through HardwareCorrupted) with the same IFS=': ' / read / continue pattern until it matches AnonHugePages ...] 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.741 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 175356760 kB' 'MemAvailable: 178212072 kB' 'Buffers: 4132 kB' 'Cached: 9370700 kB' 'SwapCached: 0 kB' 'Active: 6401012 kB' 'Inactive: 3506552 kB' 'Active(anon): 6013196 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535940 kB' 'Mapped: 195740 kB' 'Shmem: 5480464 kB' 'KReclaimable: 210788 kB' 'Slab: 710908 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500120 kB' 'KernelStack: 20336 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7517344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 
11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.743 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.744 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 175356760 kB' 'MemAvailable: 178212072 kB' 'Buffers: 4132 kB' 'Cached: 9370716 kB' 'SwapCached: 0 kB' 'Active: 6401316 kB' 'Inactive: 3506552 kB' 'Active(anon): 6013500 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536288 kB' 'Mapped: 195740 kB' 'Shmem: 5480480 kB' 'KReclaimable: 210788 kB' 'Slab: 710908 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500120 kB' 'KernelStack: 20336 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7517364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 
2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.744 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.745 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:03.746 nr_hugepages=1536 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.746 resv_hugepages=0 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.746 surplus_hugepages=0 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.746 anon_hugepages=0 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 175357940 kB' 'MemAvailable: 178213252 kB' 'Buffers: 4132 kB' 'Cached: 9370736 kB' 'SwapCached: 0 kB' 'Active: 6401040 kB' 'Inactive: 3506552 kB' 'Active(anon): 6013224 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535968 kB' 'Mapped: 195740 kB' 'Shmem: 5480500 kB' 'KReclaimable: 210788 kB' 'Slab: 710908 kB' 'SReclaimable: 210788 kB' 'SUnreclaim: 500120 kB' 'KernelStack: 20336 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 7517384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.746 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86589740 kB' 'MemUsed: 11072944 kB' 'SwapCached: 0 kB' 'Active: 5271836 kB' 'Inactive: 3292688 kB' 'Active(anon): 5062992 kB' 'Inactive(anon): 0 kB' 'Active(file): 208844 kB' 'Inactive(file): 3292688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8104964 kB' 'Mapped: 142488 kB' 'AnonPages: 462636 kB' 'Shmem: 4603432 kB' 'KernelStack: 12072 kB' 'PageTables: 5872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124320 kB' 'Slab: 360420 kB' 'SReclaimable: 124320 kB' 'SUnreclaim: 236100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.748 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.010 11:11:59
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.010 11:11:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 88768200 kB' 'MemUsed: 4950276 kB' 'SwapCached: 0 kB' 'Active: 1129596 kB' 'Inactive: 213864 kB' 'Active(anon): 950624 kB' 'Inactive(anon): 0 kB' 'Active(file): 178972 kB' 'Inactive(file): 213864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1269948 kB' 'Mapped: 53252 kB' 'AnonPages: 73684 kB' 'Shmem: 877112 kB' 'KernelStack: 8280 kB' 'PageTables: 2652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86468 kB' 'Slab: 350488 kB' 'SReclaimable: 86468 kB' 'SUnreclaim: 264020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.010 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.011 11:11:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.011 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.011 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.011 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.011 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.012 11:11:59
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:04.012 node0=512 expecting 512 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:04.012 node1=1024 expecting 1024 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:04.012 00:04:04.012 real 0m3.032s 00:04:04.012 user 
0m1.200s 00:04:04.012 sys 0m1.896s 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.012 11:11:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.012 ************************************ 00:04:04.012 END TEST custom_alloc 00:04:04.012 ************************************ 00:04:04.012 11:11:59 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:04.012 11:11:59 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.012 11:11:59 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.012 11:11:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.012 ************************************ 00:04:04.012 START TEST no_shrink_alloc 00:04:04.012 ************************************ 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:04.012 
11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.012 11:11:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.546 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.546 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:06.547 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 
00:04:06.547 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.547 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.846 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.846 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.847 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176337828 kB' 'MemAvailable: 179193132 kB' 'Buffers: 4132 kB' 'Cached: 9370844 kB' 'SwapCached: 0 kB' 'Active: 6409328 kB' 'Inactive: 3506552 kB' 'Active(anon): 6021512 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544252 kB' 'Mapped: 196656 kB' 'Shmem: 5480608 kB' 'KReclaimable: 210772 kB' 'Slab: 711032 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500260 kB' 'KernelStack: 20720 kB' 'PageTables: 9556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7529508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315116 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:06.847 [... xtrace repeated for each /proc/meminfo field (MemTotal, MemFree, MemAvailable, ... HardwareCorrupted): setup/common.sh@32 -- # continue, until AnonHugePages matched ...] 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.848 11:12:02
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.848 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176347836 kB' 'MemAvailable: 179203140 kB' 'Buffers: 4132 kB' 'Cached: 9370844 kB' 'SwapCached: 0 kB' 'Active: 6408400 kB' 'Inactive: 3506552 kB' 'Active(anon): 6020584 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543280 kB' 'Mapped: 196612 kB' 'Shmem: 5480608 kB' 'KReclaimable: 210772 kB' 'Slab: 711016 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500244 kB' 'KernelStack: 20624 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7529524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315004 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:06.848 
[... xtrace repeated: setup/common.sh@32 -- # continue, while scanning /proc/meminfo fields (MemTotal, MemFree, MemAvailable, Buffers, Cached, ...) for HugePages_Surp ...]
00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.849 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.850 11:12:02 
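The trace above is `setup/common.sh`'s `get_meminfo` scanning `/proc/meminfo` line by line: split each "Key: value kB" record on `': '`, skip until the requested key matches, then echo its value. A minimal self-contained sketch of that parsing pattern (the function name `get_meminfo_from`, the file argument, and the sample values are illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern seen in the trace (assumption: this
# mirrors, but is not, SPDK's setup/common.sh get_meminfo).
shopt -s extglob  # needed for the +([0-9]) pattern below

# get_meminfo_from FILE KEY -> prints KEY's value column, returns 1 if absent.
get_meminfo_from() {
    local mem_f=$1 get=$2 var val _ line
    local -a mem
    mapfile -t mem < "$mem_f"
    # Strip the "Node N " prefix found in per-node meminfo files
    # (a no-op for the system-wide /proc/meminfo).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "Key: value kB" -> var, val
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}

# Usage against a small sample file (hypothetical values):
sample=$(mktemp)
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
    'HugePages_Rsvd: 0' 'HugePages_Surp: 0' > "$sample"
surp=$(get_meminfo_from "$sample" HugePages_Surp)
echo "surp=$surp"
rm -f "$sample"
```

Splitting on `IFS=': '` treats the colon as a delimiter and collapses the following space, so the unit ("kB") lands in the discarded `_` field; that is why the trace shows `read -r var val _` after every `IFS=': '`.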
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.850 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.851 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176347676 kB' 'MemAvailable: 179202980 kB' 'Buffers: 4132 kB' 'Cached: 9370864 kB' 'SwapCached: 0 kB' 'Active: 6408344 kB' 'Inactive: 3506552 kB' 'Active(anon): 6020528 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543140 kB' 'Mapped: 196612 kB' 'Shmem: 5480628 kB' 'KReclaimable: 210772 kB' 'Slab: 711084 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500312 kB' 'KernelStack: 20464 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7529548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315004 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB'
00:04:06.851 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.851 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:06.851 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.851 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[identical compare/continue/IFS/read trace entries repeat for the subsequent /proc/meminfo fields]
00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.852 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.853 nr_hugepages=1024 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.853 resv_hugepages=0 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.853 surplus_hugepages=0 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.853 anon_hugepages=0 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176348812 kB' 'MemAvailable: 179204116 kB' 'Buffers: 4132 kB' 'Cached: 9370884 kB' 'SwapCached: 0 kB' 'Active: 6407600 kB' 'Inactive: 3506552 kB' 'Active(anon): 6019784 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542404 kB' 'Mapped: 196612 kB' 'Shmem: 5480648 kB' 'KReclaimable: 210772 kB' 'Slab: 711244 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500472 kB' 'KernelStack: 20496 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7529568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314988 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.853 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
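[editor's note] The trace above is the field-by-field scan performed by `get_meminfo` in `setup/common.sh`: it maps `/proc/meminfo` (or a per-node meminfo file under `/sys`) into an array, then reads each `key: value` pair until the requested key matches and echoes its value. A minimal re-creation sketched from the trace follows; the function name and line structure mirror the trace, but this is not the verbatim SPDK source:

```shell
#!/usr/bin/env bash
# extglob is needed for the +([0-9]) pattern used to strip "Node <id> " prefixes.
shopt -s extglob

# Sketch of get_meminfo as seen in the trace: get_meminfo <field> [<node>]
get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo
	# When a node id is given, per-node counters live under /sys instead.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem < "$mem_f"
	# Per-node meminfo lines are prefixed with "Node <id> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		# Split "HugePages_Rsvd: 0" / "MemTotal: 191381160 kB" on ": ".
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

# Example (Linux): print the reserved hugepage count, as the trace does.
# get_meminfo HugePages_Rsvd
```

This also explains why the trace is so long: every non-matching field costs one `read`, one `[[ ]]` comparison, and one `continue`, all echoed by xtrace.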
[identical read/compare/continue trace iterations for each /proc/meminfo field (MemFree through Unaccepted) elided]
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var
val 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.855 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85553688 kB' 'MemUsed: 12108996 kB' 'SwapCached: 0 kB' 'Active: 5277128 kB' 'Inactive: 3292688 kB' 'Active(anon): 5068284 kB' 'Inactive(anon): 0 kB' 'Active(file): 208844 kB' 'Inactive(file): 3292688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8105032 kB' 'Mapped: 142656 kB' 'AnonPages: 468024 kB' 'Shmem: 4603500 kB' 'KernelStack: 12072 kB' 'PageTables: 5964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124320 kB' 'Slab: 360340 kB' 'SReclaimable: 124320 kB' 'SUnreclaim: 236020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.144 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.144 11:12:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r / continue iterations over the remaining node0 meminfo fields (MemFree through Unaccepted), none matching HugePages_Surp ...] 00:04:07.145 11:12:02
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.145 node0=1024 expecting 1024 00:04:07.145 11:12:02 
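The wall of `continue` iterations above is setup/common.sh's get_meminfo helper at work: under xtrace it re-reads the whole meminfo file with `IFS=': '`, skips every key that does not match, then echoes the value of the requested key (`echo 0` / `return 0` for HugePages_Surp here). A minimal self-contained sketch of the same field-scan technique; the `get_field` name and the sample file are illustrative, not part of SPDK:

```shell
#!/usr/bin/env bash
# Minimal sketch of the field-scan loop traced above: split each
# "Key: value ..." line on ': ', skip non-matching keys with `continue`
# (the source of the long trace), and print the value of the first match.
get_field() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # every miss logs one `continue` under xtrace
    echo "$val"
    return 0
  done < "$file"
  return 1
}
```

Usage against a meminfo-style sample: `printf '%s\n' 'HugePages_Total: 1024' > sample; get_field HugePages_Total sample` prints `1024`.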
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.145 11:12:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.683 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.683 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.683 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.683 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:09.683 11:12:05 
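The `node0=1024 expecting 1024` result above and the `INFO: Requested 512 hugepages but 1024 already allocated on node0` message both rest on per-node hugepage counters. A hedged sketch of summing those counters through the kernel's standard sysfs layout; the `total_hugepages` helper is illustrative, and the base-directory parameter exists only so the sketch can be exercised against a fake tree:

```shell
#!/usr/bin/env bash
# Sum the 2 MiB hugepage count across NUMA nodes, mirroring the per-node
# nodes_sys[] accounting in setup/hugepages.sh. Assumes the standard sysfs
# layout /sys/devices/system/node/nodeN/hugepages/hugepages-2048kB/nr_hugepages.
total_hugepages() {
  local base=${1:-/sys/devices/system/node} total=0 f
  for f in "$base"/node*/hugepages/hugepages-2048kB/nr_hugepages; do
    [[ -r $f ]] || continue          # glob may not match on non-NUMA systems
    total=$(( total + $(cat "$f") ))
  done
  echo "$total"
}
```

On the node traced here this would report 1024 (all on node0, none on node1), which is why the request for NRHUGE=512 is a no-op.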
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:09.683 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.683 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.683 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.683 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.683 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.683 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.683 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.000 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.000 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.000 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.000 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176359604 kB' 'MemAvailable: 179214908 kB' 'Buffers: 4132 kB' 'Cached: 9370980 kB' 'SwapCached: 0 kB' 'Active: 6408908 kB' 'Inactive: 3506552 kB' 'Active(anon): 6021092 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543528 kB' 'Mapped: 196672 kB' 'Shmem: 5480744 kB' 'KReclaimable: 210772 kB' 'Slab: 711256 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500484 kB' 'KernelStack: 20720 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7529920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315196 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:10.001 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r / continue iterations over the remaining /proc/meminfo fields while scanning for AnonHugePages ...]
setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.002 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176361532 kB' 'MemAvailable: 179216836 kB' 'Buffers: 4132 kB' 'Cached: 9370984 kB' 'SwapCached: 0 kB' 'Active: 6408680 kB' 'Inactive: 3506552 kB' 'Active(anon): 6020864 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543348 kB' 'Mapped: 196620 kB' 'Shmem: 5480748 kB' 'KReclaimable: 210772 kB' 'Slab: 711124 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500352 kB' 'KernelStack: 20560 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7530184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315036 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 
11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.003 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 
11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.004 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.005 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.005 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176357760 kB' 'MemAvailable: 179213064 kB' 'Buffers: 4132 kB' 'Cached: 9371004 kB' 'SwapCached: 0 kB' 'Active: 6408092 kB' 'Inactive: 3506552 kB' 'Active(anon): 6020276 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542748 kB' 'Mapped: 196620 kB' 'Shmem: 5480768 kB' 'KReclaimable: 210772 kB' 'Slab: 711124 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500352 kB' 'KernelStack: 20304 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7527592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314972 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:10.006 
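The trace above is `get_meminfo` walking every `Key: value` line of `/proc/meminfo`, skipping keys that don't match the requested one via `continue`. A simplified standalone sketch of that technique (this is an illustrative rewrite, not the exact SPDK `setup/common.sh` helper; the here-string sample stands in for `/proc/meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of the pattern visible in the trace: split each meminfo-style line
# on ": " into key and value, skip non-matching keys, echo the match.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" in the log
        echo "$val"
        return 0
    done
    return 1
}

# Sample input mirroring the hugepage counters printed in this log.
sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0'

get_meminfo HugePages_Rsvd <<<"$sample"    # prints 0
get_meminfo HugePages_Total <<<"$sample"   # prints 1024
```

Each iteration in the log corresponds to one `read`/`[[ ... ]]`/`continue` triple here; the `echo 0` / `return 0` pair in the trace is the match branch.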
11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 
11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.006 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.007 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.008 nr_hugepages=1024 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.008 resv_hugepages=0 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.008 surplus_hugepages=0 00:04:10.008 11:12:05 
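At this point the trace has collected `surp=0`, `resv=0`, and echoed `nr_hugepages=1024`, and `hugepages.sh@107` checks that the kernel's total matches the configured count plus surplus and reserved pages. A hedged sketch of that accounting check, with the values hard-coded from this log run (the variable names mirror the trace but this is an illustration, not the SPDK script itself):

```shell
#!/usr/bin/env bash
# Consistency check corresponding to hugepages.sh@107 in the trace:
# HugePages_Total must equal nr_hugepages + HugePages_Surp + HugePages_Rsvd.
nr_hugepages=1024   # configured count, echoed as nr_hugepages=1024
surp=0              # from get_meminfo HugePages_Surp
resv=0              # from get_meminfo HugePages_Rsvd
total=1024          # from get_meminfo HugePages_Total

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi
```

With surplus and reserved both zero, the check reduces to `total == nr_hugepages`, which is why the trace proceeds directly to re-reading `HugePages_Total`.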
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.008 anon_hugepages=0 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 176358084 kB' 'MemAvailable: 179213388 kB' 'Buffers: 4132 kB' 'Cached: 9371024 kB' 'SwapCached: 0 kB' 'Active: 6408152 kB' 'Inactive: 3506552 kB' 'Active(anon): 6020336 kB' 'Inactive(anon): 0 kB' 'Active(file): 387816 kB' 'Inactive(file): 
3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542820 kB' 'Mapped: 196616 kB' 'Shmem: 5480788 kB' 'KReclaimable: 210772 kB' 'Slab: 711148 kB' 'SReclaimable: 210772 kB' 'SUnreclaim: 500376 kB' 'KernelStack: 20416 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 7527612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314972 kB' 'VmallocChunk: 0 kB' 'Percpu: 66048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2257876 kB' 'DirectMap2M: 12101632 kB' 'DirectMap1G: 187695104 kB' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.008 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:10.009 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.009 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.009 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.009 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[... identical IFS=': ' / read -r var val _ / continue iterations skipping the remaining meminfo fields (Mlocked through Unaccepted) elided ...]
00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc --
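The xtrace above is the meminfo-lookup pattern from setup/common.sh: set IFS=': ', read each line into var/val, skip until the requested key matches, then echo the value. A minimal standalone sketch of that pattern (the function name and file argument here are illustrative, not the script's own helper):

```shell
# Sketch of the get_meminfo lookup pattern seen in the trace above.
# Reads key/value pairs ("Key:   value [unit]") and prints the value
# for the requested key; returns non-zero if the key is absent.
get_meminfo_sketch() {
	local get=$1 mem_f=${2:-/proc/meminfo}
	local var val _
	while IFS=': ' read -r var val _; do
		# Skip every field until the requested key matches,
		# mirroring the [[ ... == ... ]] || continue loop in the log.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done <"$mem_f"
	return 1
}
```

With the snapshot captured in this log, `get_meminfo_sketch HugePages_Total` would print 1024, matching the `echo 1024` seen at setup/common.sh@33.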
setup/hugepages.sh@27 -- # local node 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.010 11:12:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85539344 kB' 'MemUsed: 12123340 kB' 'SwapCached: 0 kB' 'Active: 5277912 kB' 'Inactive: 3292688 kB' 'Active(anon): 5069068 kB' 'Inactive(anon): 0 kB' 'Active(file): 208844 kB' 'Inactive(file): 3292688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8105132 kB' 'Mapped: 142664 kB' 'AnonPages: 468684 kB' 'Shmem: 4603600 kB' 'KernelStack: 12104 kB' 'PageTables: 6060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124320 kB' 'Slab: 360552 kB' 'SReclaimable: 124320 kB' 'SUnreclaim: 236232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.010 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / continue iterations skipping MemFree through HugePages_Free elided ...]
00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.012 node0=1024 expecting 1024 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.012 00:04:10.012 real 0m6.008s 00:04:10.012 user 0m2.429s 00:04:10.012 sys 0m3.720s 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.012 11:12:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.012 ************************************ 00:04:10.012 END TEST no_shrink_alloc 00:04:10.012 ************************************ 00:04:10.012 11:12:05 setup.sh.hugepages --
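The get_meminfo call traced above (setup/common.sh@18-24) is node-aware: when a node number is given and the per-node sysfs file exists, it reads /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo (and later strips the "Node N " prefix from each line). A hedged sketch of just that source-selection step (function name illustrative):

```shell
# Sketch of the per-node meminfo source selection mirrored from
# setup/common.sh@22-24: prefer the per-node sysfs file when present,
# otherwise fall back to the system-wide /proc/meminfo. Per-node lines
# carry a "Node N " prefix that the real helper strips before parsing.
node_meminfo_file() {
	local node=$1
	local mem_f=/proc/meminfo
	local node_f=/sys/devices/system/node/node${node}/meminfo
	[[ -e $node_f ]] && mem_f=$node_f
	echo "$mem_f"
}
```

This is why the trace for node 0 shows `mem_f=/sys/devices/system/node/node0/meminfo` on this two-node machine.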
setup/hugepages.sh@217 -- # clear_hp 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:10.012 11:12:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:10.012 00:04:10.012 real 0m23.175s 00:04:10.012 user 0m8.915s 00:04:10.012 sys 0m13.457s 00:04:10.012 11:12:05 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.012 11:12:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.012 ************************************ 00:04:10.012 END TEST hugepages 00:04:10.012 ************************************ 00:04:10.012 11:12:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.012 11:12:05 setup.sh -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:04:10.012 11:12:05 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.012 11:12:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.012 ************************************ 00:04:10.012 START TEST driver 00:04:10.012 ************************************ 00:04:10.012 11:12:05 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.271 * Looking for test storage... 00:04:10.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.271 11:12:05 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:10.271 11:12:05 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.271 11:12:05 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.465 11:12:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.465 11:12:09 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.465 11:12:09 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.465 11:12:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.465 ************************************ 00:04:14.465 START TEST guess_driver 00:04:14.465 ************************************ 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.465 
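The `pick_driver`/`vfio` step above decides which userspace I/O driver the tests will bind: vfio-pci is chosen when the kernel exposes IOMMU groups (or when vfio's unsafe no-IOMMU mode is enabled). A minimal pure-shell sketch of that decision, with the function signature and the fallback driver name assumed rather than taken verbatim from driver.sh:

```shell
# Sketch of the vfio/pick_driver decision (names are ours, not SPDK's):
# vfio-pci is viable when IOMMU groups exist under /sys/kernel/iommu_groups
# or when vfio's enable_unsafe_noiommu_mode parameter reads Y.
pick_driver() {
  local n_iommu_groups=$1 unsafe_vfio=$2
  if [ "$n_iommu_groups" -gt 0 ] || [ "$unsafe_vfio" = "Y" ]; then
    echo vfio-pci
  else
    echo uio_pci_generic   # assumed fallback; this log never reaches the else branch
  fi
}

# This run reports 174 IOMMU groups and unsafe_vfio=N, so vfio-pci wins:
pick_driver 174 N
```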
11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:14.465 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:14.466 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.466 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.466 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.466 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.466 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:14.466 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:14.466 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=vfio-pci 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:14.466 Looking for driver=vfio-pci 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.466 11:12:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.999 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.999 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.999 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.999 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.999 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.999 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.999 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.000 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.000 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.000 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.000 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.000 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read 
-r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.258 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.259 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.259 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.259 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.259 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.259 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.259 11:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.637 11:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.637 11:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
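The `is_driver vfio_pci` check earlier in this test passed because `modprobe --show-depends vfio_pci` printed a chain of `insmod .../*.ko.xz` lines, which the script matches against `*.ko*`. A string-only sketch of that test, fed the dependency text directly so it needs no modprobe (the helper name is ours):

```shell
# is_driver-style check: a module counts as available when the
# dependency listing contains at least one kernel object (.ko) to insert.
mod_available() {
  case $1 in
    *.ko*) return 0 ;;
    *)     return 1 ;;
  esac
}

deps='insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz'
mod_available "$deps" && echo "vfio_pci loadable"
```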
00:04:18.637 11:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.896 11:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:18.896 11:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:18.896 11:12:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.896 11:12:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.083 00:04:23.083 real 0m8.563s 00:04:23.083 user 0m2.400s 00:04:23.083 sys 0m4.030s 00:04:23.083 11:12:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.083 11:12:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.083 ************************************ 00:04:23.083 END TEST guess_driver 00:04:23.083 ************************************ 00:04:23.083 00:04:23.083 real 0m12.829s 00:04:23.083 user 0m3.632s 00:04:23.083 sys 0m6.257s 00:04:23.083 11:12:18 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.083 11:12:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.083 ************************************ 00:04:23.083 END TEST driver 00:04:23.083 ************************************ 00:04:23.083 11:12:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.083 11:12:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.083 11:12:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.083 11:12:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.083 ************************************ 00:04:23.083 START TEST devices 00:04:23.083 ************************************ 00:04:23.083 11:12:18 setup.sh.devices -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.083 * Looking for test storage... 00:04:23.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:23.083 11:12:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:23.083 11:12:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:23.083 11:12:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.083 11:12:18 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.371 11:12:21 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.371 11:12:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.371 11:12:21 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:26.371 11:12:21 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.371 11:12:21 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.371 11:12:21 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.371 11:12:21 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:26.372 11:12:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:26.372 11:12:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:26.372 No valid GPT data, bailing 00:04:26.372 11:12:21 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.372 11:12:21 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:26.372 11:12:21 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:26.372 11:12:21 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:26.372 11:12:21 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:26.372 11:12:21 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:26.372 11:12:21 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.372 11:12:21 setup.sh.devices -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.372 11:12:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.372 11:12:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.372 ************************************ 00:04:26.372 START TEST nvme_mount 00:04:26.372 ************************************ 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.372 11:12:21 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.372 11:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:27.310 Creating new GPT entries in memory. 00:04:27.310 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.310 other utilities. 00:04:27.310 11:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.310 11:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.310 11:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.310 11:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.310 11:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.245 Creating new GPT entries in memory. 00:04:28.245 The operation has completed successfully. 
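The `--new=1:2048:2099199` argument passed to sgdisk above follows from common.sh's sector arithmetic: the requested 1 GiB partition (1073741824 bytes) is converted to 512-byte sectors and placed at the first aligned LBA, 2048. The computation can be replayed directly:

```shell
# Replaying the partition-size math behind `sgdisk --new=1:2048:2099199`:
size=1073741824            # requested partition size in bytes (1 GiB)
(( size /= 512 ))          # convert to 512-byte sectors -> 2097152
part_start=2048            # first usable sector for partition 1
(( part_end = part_start + size - 1 ))
echo "--new=1:${part_start}:${part_end}"   # → --new=1:2048:2099199
```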
00:04:28.245 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.245 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.245 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1317399 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
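The `mkfs` helper invoked above (setup/common.sh@66-72) creates the mount point, formats the new partition with `mkfs.ext4 -qF`, and mounts it under the test directory. A dry-run sketch of that sequence; the `RUN=echo` guard and the function name are ours, so the destructive commands are printed rather than executed:

```shell
RUN=${RUN:-echo}   # set RUN= (empty) to really format and mount

# Sketch of the format-and-mount sequence from setup/common.sh
mkfs_and_mount() {
  local dev=$1 mnt=$2
  mkdir -p "$mnt"            # ensure the mount point exists
  $RUN mkfs.ext4 -qF "$dev"  # quiet, forced ext4 format
  $RUN mount "$dev" "$mnt"
}

mkfs_and_mount /dev/nvme0n1p1 /tmp/nvme_mount
```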
00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.504 11:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:31.040 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:31.299 
11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.299 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.299 11:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.558 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:31.558 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:31.558 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:31.558 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.559 11:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:34.850 11:12:29 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.850 11:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.386 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.386 00:04:37.386 real 0m10.998s 00:04:37.386 user 0m3.346s 00:04:37.386 sys 0m5.515s 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.386 11:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.386 ************************************ 00:04:37.386 END TEST nvme_mount 00:04:37.386 ************************************ 00:04:37.386 11:12:32 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.386 11:12:32 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:04:37.386 11:12:32 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.386 11:12:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.386 ************************************ 00:04:37.386 START TEST dm_mount 00:04:37.386 ************************************ 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.386 11:12:32 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.323 Creating new GPT entries in memory. 00:04:38.323 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.323 other utilities. 00:04:38.323 11:12:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.323 11:12:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.323 11:12:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.323 11:12:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.323 11:12:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:39.701 Creating new GPT entries in memory. 00:04:39.701 The operation has completed successfully. 00:04:39.701 11:12:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:39.701 11:12:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.701 11:12:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.701 11:12:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.701 11:12:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:40.639 The operation has completed successfully. 
00:04:40.639 11:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.639 11:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.639 11:12:35 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1321548 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.639 11:12:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:43.179 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.472 11:12:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.033 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:46.293 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:46.293 00:04:46.293 real 0m8.943s 00:04:46.293 user 0m2.139s 00:04:46.293 sys 0m3.812s 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.293 11:12:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.293 ************************************ 00:04:46.293 END TEST dm_mount 00:04:46.293 ************************************ 00:04:46.293 11:12:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:46.293 11:12:41 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:46.293 11:12:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.293 11:12:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.293 11:12:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.293 11:12:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.293 11:12:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.552 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.552 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.552 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.552 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.552 11:12:42 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:46.552 11:12:42 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.552 11:12:42 setup.sh.devices -- setup/devices.sh@36 -- # [[ 
-L /dev/mapper/nvme_dm_test ]] 00:04:46.552 11:12:42 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.552 11:12:42 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.552 11:12:42 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.552 11:12:42 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:46.552 00:04:46.552 real 0m23.664s 00:04:46.552 user 0m6.777s 00:04:46.552 sys 0m11.638s 00:04:46.552 11:12:42 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.552 11:12:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.552 ************************************ 00:04:46.552 END TEST devices 00:04:46.552 ************************************ 00:04:46.811 00:04:46.811 real 1m21.001s 00:04:46.811 user 0m26.407s 00:04:46.811 sys 0m43.779s 00:04:46.811 11:12:42 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.811 11:12:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.811 ************************************ 00:04:46.811 END TEST setup.sh 00:04:46.811 ************************************ 00:04:46.811 11:12:42 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:49.349 Hugepages 00:04:49.349 node hugesize free / total 00:04:49.349 node0 1048576kB 0 / 0 00:04:49.349 node0 2048kB 2048 / 2048 00:04:49.349 node1 1048576kB 0 / 0 00:04:49.349 node1 2048kB 0 / 0 00:04:49.349 00:04:49.349 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.349 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:49.349 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:49.349 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:49.349 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:49.608 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:49.608 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:49.608 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:49.608 
I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:49.608 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:49.608 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:49.608 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:49.608 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:49.608 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:49.608 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:49.608 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:49.608 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:49.608 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:49.608 11:12:45 -- spdk/autotest.sh@130 -- # uname -s 00:04:49.608 11:12:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:49.608 11:12:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:49.608 11:12:45 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.899 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:52.899 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.836 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.096 11:12:49 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:55.033 11:12:50 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:04:55.033 11:12:50 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:55.033 11:12:50 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.033 11:12:50 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:55.033 11:12:50 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:55.033 11:12:50 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:55.033 11:12:50 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.033 11:12:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.033 11:12:50 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:55.033 11:12:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:55.033 11:12:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:55.033 11:12:50 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.322 Waiting for block devices as requested 00:04:58.322 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:58.322 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:58.322 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:58.322 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:58.322 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:58.322 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:58.322 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:58.581 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:58.581 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:58.581 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:58.581 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:58.840 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:58.840 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:58.840 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:59.098 0000:80:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:04:59.098 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:59.098 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:59.356 11:12:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:59.356 11:12:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:59.356 11:12:54 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:59.356 11:12:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:59.356 11:12:54 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:59.356 11:12:54 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:59.356 11:12:54 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:59.356 11:12:54 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:59.356 11:12:54 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:59.356 11:12:54 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:59.356 11:12:54 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:59.356 11:12:54 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:59.356 11:12:54 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:59.356 11:12:54 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:59.356 11:12:54 -- 
common/autotest_common.sh@1557 -- # continue 00:04:59.356 11:12:54 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:59.356 11:12:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.356 11:12:54 -- common/autotest_common.sh@10 -- # set +x 00:04:59.356 11:12:54 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:59.356 11:12:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.356 11:12:54 -- common/autotest_common.sh@10 -- # set +x 00:04:59.356 11:12:54 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.643 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:02.643 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:03.579 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:03.579 11:12:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:03.579 11:12:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.579 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.579 11:12:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:03.579 11:12:59 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:03.579 11:12:59 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:03.579 11:12:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:03.579 11:12:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:03.579 11:12:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:03.579 11:12:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:03.579 11:12:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:03.579 11:12:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.579 11:12:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:03.579 11:12:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:03.839 11:12:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:03.839 11:12:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:03.839 11:12:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:03.839 11:12:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:03.839 11:12:59 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:03.839 11:12:59 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:03.839 11:12:59 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:03.839 11:12:59 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:03.839 11:12:59 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:03.839 11:12:59 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1330403 00:05:03.839 11:12:59 -- common/autotest_common.sh@1598 -- # waitforlisten 1330403 00:05:03.839 11:12:59 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.839 11:12:59 -- common/autotest_common.sh@831 -- # '[' -z 1330403 ']' 00:05:03.839 11:12:59 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:03.839 11:12:59 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.839 11:12:59 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.839 11:12:59 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.839 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.839 [2024-07-26 11:12:59.366120] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:05:03.839 [2024-07-26 11:12:59.366172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330403 ] 00:05:03.839 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.839 [2024-07-26 11:12:59.432499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.098 [2024-07-26 11:12:59.512337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.666 11:13:00 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.666 11:13:00 -- common/autotest_common.sh@864 -- # return 0 00:05:04.666 11:13:00 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:04.666 11:13:00 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:04.666 11:13:00 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:07.971 nvme0n1 00:05:07.971 11:13:03 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:07.971 [2024-07-26 11:13:03.302383] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 
00:05:07.971 request: 00:05:07.971 { 00:05:07.971 "nvme_ctrlr_name": "nvme0", 00:05:07.971 "password": "test", 00:05:07.971 "method": "bdev_nvme_opal_revert", 00:05:07.971 "req_id": 1 00:05:07.971 } 00:05:07.971 Got JSON-RPC error response 00:05:07.971 response: 00:05:07.971 { 00:05:07.971 "code": -32602, 00:05:07.971 "message": "Invalid parameters" 00:05:07.971 } 00:05:07.971 11:13:03 -- common/autotest_common.sh@1604 -- # true 00:05:07.971 11:13:03 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:07.971 11:13:03 -- common/autotest_common.sh@1608 -- # killprocess 1330403 00:05:07.971 11:13:03 -- common/autotest_common.sh@950 -- # '[' -z 1330403 ']' 00:05:07.971 11:13:03 -- common/autotest_common.sh@954 -- # kill -0 1330403 00:05:07.971 11:13:03 -- common/autotest_common.sh@955 -- # uname 00:05:07.971 11:13:03 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.971 11:13:03 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1330403 00:05:07.971 11:13:03 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.971 11:13:03 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.971 11:13:03 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1330403' 00:05:07.971 killing process with pid 1330403 00:05:07.971 11:13:03 -- common/autotest_common.sh@969 -- # kill 1330403 00:05:07.971 11:13:03 -- common/autotest_common.sh@974 -- # wait 1330403 00:05:09.874 11:13:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:09.874 11:13:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:09.874 11:13:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:09.874 11:13:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:09.874 11:13:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:09.874 11:13:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.874 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:09.874 11:13:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:09.874 
11:13:05 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:09.874 11:13:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.874 11:13:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.874 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:10.132 ************************************ 00:05:10.132 START TEST env 00:05:10.132 ************************************ 00:05:10.132 11:13:05 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.132 * Looking for test storage... 00:05:10.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:10.132 11:13:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.132 11:13:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.132 11:13:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.132 11:13:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.132 ************************************ 00:05:10.132 START TEST env_memory 00:05:10.132 ************************************ 00:05:10.132 11:13:05 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.132 00:05:10.132 00:05:10.132 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.132 http://cunit.sourceforge.net/ 00:05:10.132 00:05:10.132 00:05:10.132 Suite: memory 00:05:10.133 Test: alloc and free memory map ...[2024-07-26 11:13:05.722492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.133 passed 00:05:10.133 Test: mem map translation ...[2024-07-26 11:13:05.740561] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: 
*ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.133 [2024-07-26 11:13:05.740579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.133 [2024-07-26 11:13:05.740613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.133 [2024-07-26 11:13:05.740621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.133 passed 00:05:10.133 Test: mem map registration ...[2024-07-26 11:13:05.777238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:10.133 [2024-07-26 11:13:05.777254] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:10.133 passed 00:05:10.393 Test: mem map adjacent registrations ...passed 00:05:10.393 00:05:10.393 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.393 suites 1 1 n/a 0 0 00:05:10.393 tests 4 4 4 0 0 00:05:10.393 asserts 152 152 152 0 n/a 00:05:10.393 00:05:10.393 Elapsed time = 0.134 seconds 00:05:10.393 00:05:10.393 real 0m0.147s 00:05:10.393 user 0m0.140s 00:05:10.393 sys 0m0.007s 00:05:10.393 11:13:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.393 11:13:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:10.393 ************************************ 00:05:10.393 END TEST env_memory 00:05:10.393 ************************************ 00:05:10.393 11:13:05 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.393 11:13:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.393 11:13:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.393 11:13:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.393 ************************************ 00:05:10.393 START TEST env_vtophys 00:05:10.393 ************************************ 00:05:10.393 11:13:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.393 EAL: lib.eal log level changed from notice to debug 00:05:10.393 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.393 EAL: Detected lcore 1 as core 1 on socket 0 00:05:10.393 EAL: Detected lcore 2 as core 2 on socket 0 00:05:10.393 EAL: Detected lcore 3 as core 3 on socket 0 00:05:10.393 EAL: Detected lcore 4 as core 4 on socket 0 00:05:10.393 EAL: Detected lcore 5 as core 5 on socket 0 00:05:10.393 EAL: Detected lcore 6 as core 6 on socket 0 00:05:10.393 EAL: Detected lcore 7 as core 8 on socket 0 00:05:10.393 EAL: Detected lcore 8 as core 9 on socket 0 00:05:10.393 EAL: Detected lcore 9 as core 10 on socket 0 00:05:10.393 EAL: Detected lcore 10 as core 11 on socket 0 00:05:10.393 EAL: Detected lcore 11 as core 12 on socket 0 00:05:10.393 EAL: Detected lcore 12 as core 13 on socket 0 00:05:10.393 EAL: Detected lcore 13 as core 16 on socket 0 00:05:10.393 EAL: Detected lcore 14 as core 17 on socket 0 00:05:10.393 EAL: Detected lcore 15 as core 18 on socket 0 00:05:10.393 EAL: Detected lcore 16 as core 19 on socket 0 00:05:10.393 EAL: Detected lcore 17 as core 20 on socket 0 00:05:10.393 EAL: Detected lcore 18 as core 21 on socket 0 00:05:10.393 EAL: Detected lcore 19 as core 25 on socket 0 00:05:10.393 EAL: Detected lcore 20 as core 26 on socket 0 00:05:10.393 EAL: Detected lcore 21 as core 27 on socket 0 00:05:10.393 EAL: Detected lcore 22 as core 28 on socket 0 
00:05:10.393 EAL: Detected lcore 23 as core 29 on socket 0 00:05:10.393 EAL: Detected lcore 24 as core 0 on socket 1 00:05:10.393 EAL: Detected lcore 25 as core 1 on socket 1 00:05:10.393 EAL: Detected lcore 26 as core 2 on socket 1 00:05:10.393 EAL: Detected lcore 27 as core 3 on socket 1 00:05:10.393 EAL: Detected lcore 28 as core 4 on socket 1 00:05:10.393 EAL: Detected lcore 29 as core 5 on socket 1 00:05:10.393 EAL: Detected lcore 30 as core 6 on socket 1 00:05:10.393 EAL: Detected lcore 31 as core 8 on socket 1 00:05:10.393 EAL: Detected lcore 32 as core 10 on socket 1 00:05:10.393 EAL: Detected lcore 33 as core 11 on socket 1 00:05:10.393 EAL: Detected lcore 34 as core 12 on socket 1 00:05:10.393 EAL: Detected lcore 35 as core 13 on socket 1 00:05:10.393 EAL: Detected lcore 36 as core 16 on socket 1 00:05:10.393 EAL: Detected lcore 37 as core 17 on socket 1 00:05:10.393 EAL: Detected lcore 38 as core 18 on socket 1 00:05:10.393 EAL: Detected lcore 39 as core 19 on socket 1 00:05:10.393 EAL: Detected lcore 40 as core 20 on socket 1 00:05:10.393 EAL: Detected lcore 41 as core 21 on socket 1 00:05:10.393 EAL: Detected lcore 42 as core 24 on socket 1 00:05:10.393 EAL: Detected lcore 43 as core 25 on socket 1 00:05:10.393 EAL: Detected lcore 44 as core 26 on socket 1 00:05:10.393 EAL: Detected lcore 45 as core 27 on socket 1 00:05:10.393 EAL: Detected lcore 46 as core 28 on socket 1 00:05:10.393 EAL: Detected lcore 47 as core 29 on socket 1 00:05:10.393 EAL: Detected lcore 48 as core 0 on socket 0 00:05:10.393 EAL: Detected lcore 49 as core 1 on socket 0 00:05:10.393 EAL: Detected lcore 50 as core 2 on socket 0 00:05:10.393 EAL: Detected lcore 51 as core 3 on socket 0 00:05:10.393 EAL: Detected lcore 52 as core 4 on socket 0 00:05:10.393 EAL: Detected lcore 53 as core 5 on socket 0 00:05:10.393 EAL: Detected lcore 54 as core 6 on socket 0 00:05:10.393 EAL: Detected lcore 55 as core 8 on socket 0 00:05:10.393 EAL: Detected lcore 56 as core 9 on socket 0 
00:05:10.393 EAL: Detected lcore 57 as core 10 on socket 0 00:05:10.393 EAL: Detected lcore 58 as core 11 on socket 0 00:05:10.393 EAL: Detected lcore 59 as core 12 on socket 0 00:05:10.393 EAL: Detected lcore 60 as core 13 on socket 0 00:05:10.393 EAL: Detected lcore 61 as core 16 on socket 0 00:05:10.393 EAL: Detected lcore 62 as core 17 on socket 0 00:05:10.393 EAL: Detected lcore 63 as core 18 on socket 0 00:05:10.393 EAL: Detected lcore 64 as core 19 on socket 0 00:05:10.393 EAL: Detected lcore 65 as core 20 on socket 0 00:05:10.393 EAL: Detected lcore 66 as core 21 on socket 0 00:05:10.393 EAL: Detected lcore 67 as core 25 on socket 0 00:05:10.393 EAL: Detected lcore 68 as core 26 on socket 0 00:05:10.393 EAL: Detected lcore 69 as core 27 on socket 0 00:05:10.393 EAL: Detected lcore 70 as core 28 on socket 0 00:05:10.393 EAL: Detected lcore 71 as core 29 on socket 0 00:05:10.393 EAL: Detected lcore 72 as core 0 on socket 1 00:05:10.393 EAL: Detected lcore 73 as core 1 on socket 1 00:05:10.393 EAL: Detected lcore 74 as core 2 on socket 1 00:05:10.393 EAL: Detected lcore 75 as core 3 on socket 1 00:05:10.393 EAL: Detected lcore 76 as core 4 on socket 1 00:05:10.393 EAL: Detected lcore 77 as core 5 on socket 1 00:05:10.393 EAL: Detected lcore 78 as core 6 on socket 1 00:05:10.393 EAL: Detected lcore 79 as core 8 on socket 1 00:05:10.393 EAL: Detected lcore 80 as core 10 on socket 1 00:05:10.393 EAL: Detected lcore 81 as core 11 on socket 1 00:05:10.393 EAL: Detected lcore 82 as core 12 on socket 1 00:05:10.393 EAL: Detected lcore 83 as core 13 on socket 1 00:05:10.393 EAL: Detected lcore 84 as core 16 on socket 1 00:05:10.393 EAL: Detected lcore 85 as core 17 on socket 1 00:05:10.393 EAL: Detected lcore 86 as core 18 on socket 1 00:05:10.393 EAL: Detected lcore 87 as core 19 on socket 1 00:05:10.393 EAL: Detected lcore 88 as core 20 on socket 1 00:05:10.393 EAL: Detected lcore 89 as core 21 on socket 1 00:05:10.393 EAL: Detected lcore 90 as core 24 on socket 1 
00:05:10.393 EAL: Detected lcore 91 as core 25 on socket 1 00:05:10.393 EAL: Detected lcore 92 as core 26 on socket 1 00:05:10.393 EAL: Detected lcore 93 as core 27 on socket 1 00:05:10.393 EAL: Detected lcore 94 as core 28 on socket 1 00:05:10.393 EAL: Detected lcore 95 as core 29 on socket 1 00:05:10.393 EAL: Maximum logical cores by configuration: 128 00:05:10.393 EAL: Detected CPU lcores: 96 00:05:10.393 EAL: Detected NUMA nodes: 2 00:05:10.393 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:10.393 EAL: Detected shared linkage of DPDK 00:05:10.393 EAL: No shared files mode enabled, IPC will be disabled 00:05:10.393 EAL: Bus pci wants IOVA as 'DC' 00:05:10.393 EAL: Buses did not request a specific IOVA mode. 00:05:10.393 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:10.393 EAL: Selected IOVA mode 'VA' 00:05:10.393 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.393 EAL: Probing VFIO support... 00:05:10.393 EAL: IOMMU type 1 (Type 1) is supported 00:05:10.393 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:10.393 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:10.393 EAL: VFIO support initialized 00:05:10.393 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.394 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.394 EAL: Setting up physically contiguous memory... 
00:05:10.394 EAL: Setting maximum number of open files to 524288 00:05:10.394 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.394 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:10.394 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.394 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:10.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.394 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:10.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.394 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:10.394 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:10.394 EAL: Hugepages will be freed exactly as allocated. 
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled 00:05:10.394 EAL: No shared files mode enabled, IPC is disabled 00:05:10.394 EAL: TSC frequency is ~2100000 KHz 00:05:10.394 EAL: Main lcore 0 is ready (tid=7ff9e4f99a00;cpuset=[0]) 00:05:10.394 EAL: Trying to obtain current memory policy. 00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.394 EAL: Restoring previous memory policy: 0 00:05:10.394 EAL: request: mp_malloc_sync 00:05:10.394 EAL: No shared files mode enabled, IPC is disabled 00:05:10.394 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.394 EAL: No shared files mode enabled, IPC is disabled 00:05:10.394 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:10.394 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.394 00:05:10.394 00:05:10.394 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.394 http://cunit.sourceforge.net/ 00:05:10.394 00:05:10.394 00:05:10.394 Suite: components_suite 00:05:10.394 Test: vtophys_malloc_test ...passed 00:05:10.394 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.394 EAL: Restoring previous memory policy: 4 00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.394 EAL: request: mp_malloc_sync 00:05:10.394 EAL: No shared files mode enabled, IPC is disabled 00:05:10.394 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.394 EAL: request: mp_malloc_sync 00:05:10.394 EAL: No shared files mode enabled, IPC is disabled 00:05:10.394 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.394 EAL: Trying to obtain current memory policy. 
00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.394 EAL: Restoring previous memory policy: 4
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was expanded by 6MB
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was shrunk by 6MB
00:05:10.394 EAL: Trying to obtain current memory policy.
00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.394 EAL: Restoring previous memory policy: 4
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was expanded by 10MB
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was shrunk by 10MB
00:05:10.394 EAL: Trying to obtain current memory policy.
00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.394 EAL: Restoring previous memory policy: 4
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was expanded by 18MB
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was shrunk by 18MB
00:05:10.394 EAL: Trying to obtain current memory policy.
00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.394 EAL: Restoring previous memory policy: 4
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was expanded by 34MB
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was shrunk by 34MB
00:05:10.394 EAL: Trying to obtain current memory policy.
00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.394 EAL: Restoring previous memory policy: 4
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was expanded by 66MB
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was shrunk by 66MB
00:05:10.394 EAL: Trying to obtain current memory policy.
00:05:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.394 EAL: Restoring previous memory policy: 4
00:05:10.394 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.394 EAL: request: mp_malloc_sync
00:05:10.394 EAL: No shared files mode enabled, IPC is disabled
00:05:10.394 EAL: Heap on socket 0 was expanded by 130MB
00:05:10.654 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.654 EAL: request: mp_malloc_sync
00:05:10.654 EAL: No shared files mode enabled, IPC is disabled
00:05:10.654 EAL: Heap on socket 0 was shrunk by 130MB
00:05:10.654 EAL: Trying to obtain current memory policy.
00:05:10.654 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.654 EAL: Restoring previous memory policy: 4
00:05:10.654 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.654 EAL: request: mp_malloc_sync
00:05:10.654 EAL: No shared files mode enabled, IPC is disabled
00:05:10.654 EAL: Heap on socket 0 was expanded by 258MB
00:05:10.654 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.654 EAL: request: mp_malloc_sync
00:05:10.654 EAL: No shared files mode enabled, IPC is disabled
00:05:10.654 EAL: Heap on socket 0 was shrunk by 258MB
00:05:10.654 EAL: Trying to obtain current memory policy.
00:05:10.654 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.654 EAL: Restoring previous memory policy: 4
00:05:10.654 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.654 EAL: request: mp_malloc_sync
00:05:10.654 EAL: No shared files mode enabled, IPC is disabled
00:05:10.654 EAL: Heap on socket 0 was expanded by 514MB
00:05:10.912 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.912 EAL: request: mp_malloc_sync
00:05:10.912 EAL: No shared files mode enabled, IPC is disabled
00:05:10.912 EAL: Heap on socket 0 was shrunk by 514MB
00:05:10.912 EAL: Trying to obtain current memory policy.
00:05:10.912 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:11.171 EAL: Restoring previous memory policy: 4
00:05:11.171 EAL: Calling mem event callback 'spdk:(nil)'
00:05:11.171 EAL: request: mp_malloc_sync
00:05:11.171 EAL: No shared files mode enabled, IPC is disabled
00:05:11.171 EAL: Heap on socket 0 was expanded by 1026MB
00:05:11.171 EAL: Calling mem event callback 'spdk:(nil)'
00:05:11.430 EAL: request: mp_malloc_sync
00:05:11.430 EAL: No shared files mode enabled, IPC is disabled
00:05:11.430 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:11.430 passed
00:05:11.430
00:05:11.430 Run Summary: Type Total Ran Passed Failed Inactive
00:05:11.430 suites 1 1 n/a 0 0
00:05:11.430 tests 2 2 2 0 0
00:05:11.430 asserts 497 497 497 0 n/a
00:05:11.430
00:05:11.430 Elapsed time = 0.970 seconds
00:05:11.430 EAL: Calling mem event callback 'spdk:(nil)'
00:05:11.430 EAL: request: mp_malloc_sync
00:05:11.430 EAL: No shared files mode enabled, IPC is disabled
00:05:11.430 EAL: Heap on socket 0 was shrunk by 2MB
00:05:11.430 EAL: No shared files mode enabled, IPC is disabled
00:05:11.430 EAL: No shared files mode enabled, IPC is disabled
00:05:11.430 EAL: No shared files mode enabled, IPC is disabled
00:05:11.430
00:05:11.430 real 0m1.087s
00:05:11.430 user 0m0.645s
00:05:11.430 sys 0m0.419s
11:13:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:11.430 11:13:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:11.430 ************************************
00:05:11.430 END TEST env_vtophys
00:05:11.430 ************************************
00:05:11.430 11:13:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:11.430 11:13:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:11.430 11:13:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:11.430 11:13:07 env -- common/autotest_common.sh@10 -- # set +x
00:05:11.430 ************************************
00:05:11.430 START TEST env_pci
00:05:11.430 ************************************
00:05:11.430 11:13:07 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:11.430
00:05:11.430
00:05:11.430 CUnit - A unit testing framework for C - Version 2.1-3
00:05:11.430 http://cunit.sourceforge.net/
00:05:11.430
00:05:11.430
00:05:11.430 Suite: pci
00:05:11.430 Test: pci_hook ...[2024-07-26 11:13:07.064414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1331724 has claimed it
00:05:11.689 EAL: Cannot find device (10000:00:01.0)
00:05:11.689 EAL: Failed to attach device on primary process
00:05:11.689 passed
00:05:11.689
00:05:11.689 Run Summary: Type Total Ran Passed Failed Inactive
00:05:11.689 suites 1 1 n/a 0 0
00:05:11.689 tests 1 1 1 0 0
00:05:11.689 asserts 25 25 25 0 n/a
00:05:11.689
00:05:11.689 Elapsed time = 0.025 seconds
00:05:11.689
00:05:11.689 real 0m0.044s
00:05:11.689 user 0m0.011s
00:05:11.689 sys 0m0.033s
11:13:07 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:11.689 11:13:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:11.689 ************************************
00:05:11.689 END TEST env_pci
00:05:11.689 ************************************
00:05:11.689 11:13:07 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:11.689 11:13:07 env -- env/env.sh@15 -- # uname
00:05:11.689 11:13:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:11.689 11:13:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:11.689 11:13:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:11.689 11:13:07 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:05:11.689 11:13:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:11.689 11:13:07 env -- common/autotest_common.sh@10 -- # set +x
00:05:11.689 ************************************
00:05:11.689 START TEST env_dpdk_post_init
00:05:11.689 ************************************
00:05:11.689 11:13:07 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:11.689 EAL: Detected CPU lcores: 96
00:05:11.689 EAL: Detected NUMA nodes: 2
00:05:11.689 EAL: Detected shared linkage of DPDK
00:05:11.689 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:11.689 EAL: Selected IOVA mode 'VA'
00:05:11.689 EAL: No free 2048 kB hugepages reported on node 1
00:05:11.689 EAL: VFIO support initialized
00:05:11.689 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:11.689 EAL: Using IOMMU type 1 (Type 1)
00:05:11.689 EAL: Ignore mapping IO port bar(1)
00:05:11.689 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:11.689 EAL: Ignore mapping IO port bar(1)
00:05:11.689 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:11.689 EAL: Ignore mapping IO port bar(1)
00:05:11.689 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:11.689 EAL: Ignore mapping IO port bar(1)
00:05:11.689 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:11.689 EAL: Ignore mapping IO port bar(1)
00:05:11.689 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:11.949 EAL: Ignore mapping IO port bar(1)
00:05:11.949 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:11.949 EAL: Ignore mapping IO port bar(1)
00:05:11.949 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:11.949 EAL: Ignore mapping IO port bar(1)
00:05:11.949 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:12.516 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:05:12.516 EAL: Ignore mapping IO port bar(1)
00:05:12.516 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:12.516 EAL: Ignore mapping IO port bar(1)
00:05:12.516 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:12.516 EAL: Ignore mapping IO port bar(1)
00:05:12.516 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:12.516 EAL: Ignore mapping IO port bar(1)
00:05:12.516 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:12.774 EAL: Ignore mapping IO port bar(1)
00:05:12.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:12.774 EAL: Ignore mapping IO port bar(1)
00:05:12.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:12.774 EAL: Ignore mapping IO port bar(1)
00:05:12.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:12.774 EAL: Ignore mapping IO port bar(1)
00:05:12.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:16.058 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:05:16.058 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:05:16.626 Starting DPDK initialization...
00:05:16.626 Starting SPDK post initialization...
00:05:16.626 SPDK NVMe probe
00:05:16.626 Attaching to 0000:5e:00.0
00:05:16.626 Attached to 0000:5e:00.0
00:05:16.626 Cleaning up...
00:05:16.626
00:05:16.626 real 0m4.844s
00:05:16.626 user 0m3.766s
00:05:16.626 sys 0m0.150s
11:13:12 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:16.626 11:13:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:16.626 ************************************
00:05:16.626 END TEST env_dpdk_post_init
00:05:16.626 ************************************
00:05:16.626 11:13:12 env -- env/env.sh@26 -- # uname
00:05:16.626 11:13:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:16.626 11:13:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:16.626 11:13:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:16.626 11:13:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:16.626 11:13:12 env -- common/autotest_common.sh@10 -- # set +x
00:05:16.626 ************************************
00:05:16.626 START TEST env_mem_callbacks
00:05:16.626 ************************************
00:05:16.626 11:13:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:16.626 EAL: Detected CPU lcores: 96
00:05:16.626 EAL: Detected NUMA nodes: 2
00:05:16.626 EAL: Detected shared linkage of DPDK
00:05:16.626 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:16.626 EAL: Selected IOVA mode 'VA'
00:05:16.626 EAL: No free 2048 kB hugepages reported on node 1
00:05:16.626 EAL: VFIO support initialized
00:05:16.626 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:16.626
00:05:16.626
00:05:16.626 CUnit - A unit testing framework for C - Version 2.1-3
00:05:16.626 http://cunit.sourceforge.net/
00:05:16.626
00:05:16.626
00:05:16.626 Suite: memory
00:05:16.626 Test: test ...
00:05:16.626 register 0x200000200000 2097152
00:05:16.626 malloc 3145728
00:05:16.626 register 0x200000400000 4194304
00:05:16.626 buf 0x200000500000 len 3145728 PASSED
00:05:16.626 malloc 64
00:05:16.626 buf 0x2000004fff40 len 64 PASSED
00:05:16.626 malloc 4194304
00:05:16.626 register 0x200000800000 6291456
00:05:16.626 buf 0x200000a00000 len 4194304 PASSED
00:05:16.626 free 0x200000500000 3145728
00:05:16.626 free 0x2000004fff40 64
00:05:16.626 unregister 0x200000400000 4194304 PASSED
00:05:16.626 free 0x200000a00000 4194304
00:05:16.626 unregister 0x200000800000 6291456 PASSED
00:05:16.626 malloc 8388608
00:05:16.626 register 0x200000400000 10485760
00:05:16.626 buf 0x200000600000 len 8388608 PASSED
00:05:16.626 free 0x200000600000 8388608
00:05:16.626 unregister 0x200000400000 10485760 PASSED
00:05:16.626 passed
00:05:16.626
00:05:16.626 Run Summary: Type Total Ran Passed Failed Inactive
00:05:16.626 suites 1 1 n/a 0 0
00:05:16.626 tests 1 1 1 0 0
00:05:16.626 asserts 15 15 15 0 n/a
00:05:16.626
00:05:16.626 Elapsed time = 0.008 seconds
00:05:16.626
00:05:16.626 real 0m0.054s
00:05:16.626 user 0m0.016s
00:05:16.626 sys 0m0.038s
11:13:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:16.626 11:13:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:16.626 ************************************
00:05:16.626 END TEST env_mem_callbacks
00:05:16.626 ************************************
00:05:16.626
00:05:16.626 real 0m6.618s
00:05:16.626 user 0m4.761s
00:05:16.626 sys 0m0.934s
11:13:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:16.626 11:13:12 env -- common/autotest_common.sh@10 -- # set +x
00:05:16.626 ************************************
00:05:16.626 END TEST env
00:05:16.626 ************************************
00:05:16.626 11:13:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:16.626 11:13:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:16.626 11:13:12 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:16.626 11:13:12 -- common/autotest_common.sh@10 -- # set +x
00:05:16.626 ************************************
00:05:16.626 START TEST rpc
00:05:16.626 ************************************
00:05:16.626 11:13:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:16.885 * Looking for test storage...
00:05:16.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:16.885 11:13:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1332765
00:05:16.885 11:13:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:16.885 11:13:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:16.885 11:13:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1332765
00:05:16.885 11:13:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 1332765 ']'
00:05:16.885 11:13:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.885 11:13:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:16.885 11:13:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:16.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.885 11:13:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:16.886 11:13:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.886 [2024-07-26 11:13:12.390003] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:05:16.886 [2024-07-26 11:13:12.390050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332765 ]
00:05:16.886 EAL: No free 2048 kB hugepages reported on node 1
00:05:16.886 [2024-07-26 11:13:12.457468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.886 [2024-07-26 11:13:12.535211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:16.886 [2024-07-26 11:13:12.535245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1332765' to capture a snapshot of events at runtime.
00:05:16.886 [2024-07-26 11:13:12.535251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:16.886 [2024-07-26 11:13:12.535257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:16.886 [2024-07-26 11:13:12.535262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1332765 for offline analysis/debug.
00:05:16.886 [2024-07-26 11:13:12.535284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.871 11:13:13 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:17.871 11:13:13 rpc -- common/autotest_common.sh@864 -- # return 0
00:05:17.871 11:13:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:17.871 11:13:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:17.871 11:13:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:17.871 11:13:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:17.871 11:13:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:17.871 11:13:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:17.871 11:13:13 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.871 ************************************
00:05:17.871 START TEST rpc_integrity
00:05:17.871 ************************************
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:17.871 {
00:05:17.871 "name": "Malloc0",
00:05:17.871 "aliases": [
00:05:17.871 "296da467-479d-4b54-ab09-34abefa27fd6"
00:05:17.871 ],
00:05:17.871 "product_name": "Malloc disk",
00:05:17.871 "block_size": 512,
00:05:17.871 "num_blocks": 16384,
00:05:17.871 "uuid": "296da467-479d-4b54-ab09-34abefa27fd6",
00:05:17.871 "assigned_rate_limits": {
00:05:17.871 "rw_ios_per_sec": 0,
00:05:17.871 "rw_mbytes_per_sec": 0,
00:05:17.871 "r_mbytes_per_sec": 0,
00:05:17.871 "w_mbytes_per_sec": 0
00:05:17.871 },
00:05:17.871 "claimed": false,
00:05:17.871 "zoned": false,
00:05:17.871 "supported_io_types": {
00:05:17.871 "read": true,
00:05:17.871 "write": true,
00:05:17.871 "unmap": true,
00:05:17.871 "flush": true,
00:05:17.871 "reset": true,
00:05:17.871 "nvme_admin": false,
00:05:17.871 "nvme_io": false,
00:05:17.871 "nvme_io_md": false,
00:05:17.871 "write_zeroes": true,
00:05:17.871 "zcopy": true,
00:05:17.871 "get_zone_info": false,
00:05:17.871 "zone_management": false,
00:05:17.871 "zone_append": false,
00:05:17.871 "compare": false,
00:05:17.871 "compare_and_write": false,
00:05:17.871 "abort": true,
00:05:17.871 "seek_hole": false,
00:05:17.871 "seek_data": false,
00:05:17.871 "copy": true,
00:05:17.871 "nvme_iov_md": false
00:05:17.871 },
00:05:17.871 "memory_domains": [
00:05:17.871 {
00:05:17.871 "dma_device_id": "system",
00:05:17.871 "dma_device_type": 1
00:05:17.871 },
00:05:17.871 {
00:05:17.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:17.871 "dma_device_type": 2
00:05:17.871 }
00:05:17.871 ],
00:05:17.871 "driver_specific": {}
00:05:17.871 }
00:05:17.871 ]'
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.871 [2024-07-26 11:13:13.354317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:17.871 [2024-07-26 11:13:13.354347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:17.871 [2024-07-26 11:13:13.354358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15812d0
00:05:17.871 [2024-07-26 11:13:13.354364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:17.871 [2024-07-26 11:13:13.355408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:17.871 [2024-07-26 11:13:13.355429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:17.871 Passthru0
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.871 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.871 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:17.871 {
00:05:17.871 "name": "Malloc0",
00:05:17.871 "aliases": [
00:05:17.871 "296da467-479d-4b54-ab09-34abefa27fd6"
00:05:17.871 ],
00:05:17.871 "product_name": "Malloc disk",
00:05:17.871 "block_size": 512,
00:05:17.871 "num_blocks": 16384,
00:05:17.871 "uuid": "296da467-479d-4b54-ab09-34abefa27fd6",
00:05:17.871 "assigned_rate_limits": {
00:05:17.871 "rw_ios_per_sec": 0,
00:05:17.871 "rw_mbytes_per_sec": 0,
00:05:17.871 "r_mbytes_per_sec": 0,
00:05:17.871 "w_mbytes_per_sec": 0
00:05:17.871 },
00:05:17.871 "claimed": true,
00:05:17.871 "claim_type": "exclusive_write",
00:05:17.871 "zoned": false,
00:05:17.871 "supported_io_types": {
00:05:17.871 "read": true,
00:05:17.871 "write": true,
00:05:17.871 "unmap": true,
00:05:17.871 "flush": true,
00:05:17.871 "reset": true,
00:05:17.871 "nvme_admin": false,
00:05:17.871 "nvme_io": false,
00:05:17.871 "nvme_io_md": false,
00:05:17.871 "write_zeroes": true,
00:05:17.871 "zcopy": true,
00:05:17.871 "get_zone_info": false,
00:05:17.871 "zone_management": false,
00:05:17.871 "zone_append": false,
00:05:17.871 "compare": false,
00:05:17.871 "compare_and_write": false,
00:05:17.871 "abort": true,
00:05:17.871 "seek_hole": false,
00:05:17.871 "seek_data": false,
00:05:17.871 "copy": true,
00:05:17.871 "nvme_iov_md": false
00:05:17.871 },
00:05:17.871 "memory_domains": [
00:05:17.871 {
00:05:17.871 "dma_device_id": "system",
00:05:17.871 "dma_device_type": 1
00:05:17.871 },
00:05:17.871 {
00:05:17.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:17.871 "dma_device_type": 2
00:05:17.871 }
00:05:17.871 ],
00:05:17.871 "driver_specific": {}
00:05:17.871 },
00:05:17.871 {
00:05:17.871 "name": "Passthru0",
00:05:17.871 "aliases": [
00:05:17.871 "f491aedb-7f0e-526c-8e49-b978e6019e9c"
00:05:17.871 ],
00:05:17.871 "product_name": "passthru",
00:05:17.871 "block_size": 512,
00:05:17.872 "num_blocks": 16384,
00:05:17.872 "uuid": "f491aedb-7f0e-526c-8e49-b978e6019e9c",
00:05:17.872 "assigned_rate_limits": {
00:05:17.872 "rw_ios_per_sec": 0,
00:05:17.872 "rw_mbytes_per_sec": 0,
00:05:17.872 "r_mbytes_per_sec": 0,
00:05:17.872 "w_mbytes_per_sec": 0
00:05:17.872 },
00:05:17.872 "claimed": false,
00:05:17.872 "zoned": false,
00:05:17.872 "supported_io_types": {
00:05:17.872 "read": true,
00:05:17.872 "write": true,
00:05:17.872 "unmap": true,
00:05:17.872 "flush": true,
00:05:17.872 "reset": true,
00:05:17.872 "nvme_admin": false,
00:05:17.872 "nvme_io": false,
00:05:17.872 "nvme_io_md": false,
00:05:17.872 "write_zeroes": true,
00:05:17.872 "zcopy": true,
00:05:17.872 "get_zone_info": false,
00:05:17.872 "zone_management": false,
00:05:17.872 "zone_append": false,
00:05:17.872 "compare": false,
00:05:17.872 "compare_and_write": false,
00:05:17.872 "abort": true,
00:05:17.872 "seek_hole": false,
00:05:17.872 "seek_data": false,
00:05:17.872 "copy": true,
00:05:17.872 "nvme_iov_md": false
00:05:17.872 },
00:05:17.872 "memory_domains": [
00:05:17.872 {
00:05:17.872 "dma_device_id": "system",
00:05:17.872 "dma_device_type": 1
00:05:17.872 },
00:05:17.872 {
00:05:17.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:17.872 "dma_device_type": 2
00:05:17.872 }
00:05:17.872 ],
00:05:17.872 "driver_specific": {
00:05:17.872 "passthru": {
00:05:17.872 "name": "Passthru0",
00:05:17.872 "base_bdev_name": "Malloc0"
00:05:17.872 }
00:05:17.872 }
00:05:17.872 }
00:05:17.872 ]'
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:17.872 11:13:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:17.872
00:05:17.872 real 0m0.283s
00:05:17.872 user 0m0.177s
00:05:17.872 sys 0m0.040s
11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:17.872 11:13:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:17.872 ************************************
00:05:17.872 END TEST rpc_integrity
00:05:17.872 ************************************
00:05:18.140 11:13:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:18.140 11:13:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:18.140 11:13:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:18.140 11:13:13 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.140 ************************************
00:05:18.140 START TEST rpc_plugins
00:05:18.140 ************************************
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:18.140 {
00:05:18.140 "name": "Malloc1",
00:05:18.140 "aliases": [
00:05:18.140 "9a606d77-281e-4e7d-a5ad-3a44fddd992c"
00:05:18.140 ],
00:05:18.140 "product_name": "Malloc disk",
00:05:18.140 "block_size": 4096,
00:05:18.140 "num_blocks": 256,
00:05:18.140 "uuid": "9a606d77-281e-4e7d-a5ad-3a44fddd992c",
00:05:18.140 "assigned_rate_limits": {
00:05:18.140 "rw_ios_per_sec": 0,
00:05:18.140 "rw_mbytes_per_sec": 0,
00:05:18.140 "r_mbytes_per_sec": 0,
00:05:18.140 "w_mbytes_per_sec": 0
00:05:18.140 },
00:05:18.140 "claimed": false,
00:05:18.140 "zoned": false,
00:05:18.140 "supported_io_types": {
00:05:18.140 "read": true,
00:05:18.140 "write": true,
00:05:18.140 "unmap": true,
00:05:18.140 "flush": true,
00:05:18.140 "reset": true,
00:05:18.140 "nvme_admin": false,
00:05:18.140 "nvme_io": false,
00:05:18.140 "nvme_io_md": false,
00:05:18.140 "write_zeroes": true,
00:05:18.140 "zcopy": true,
00:05:18.140 "get_zone_info": false,
00:05:18.140 "zone_management": false,
00:05:18.140 "zone_append": false,
00:05:18.140 "compare": false,
00:05:18.140 "compare_and_write": false,
00:05:18.140 "abort": true,
00:05:18.140 "seek_hole": false,
00:05:18.140 "seek_data": false,
00:05:18.140 "copy": true,
00:05:18.140 "nvme_iov_md": false
00:05:18.140 },
00:05:18.140 "memory_domains": [
00:05:18.140 {
00:05:18.140 "dma_device_id": "system",
00:05:18.140 "dma_device_type": 1
00:05:18.140 },
00:05:18.140 {
00:05:18.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:18.140 "dma_device_type": 2
00:05:18.140 }
00:05:18.140 ],
00:05:18.140 "driver_specific": {}
00:05:18.140 }
00:05:18.140 ]'
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:18.140 11:13:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:18.140
00:05:18.140 real 0m0.137s
00:05:18.140 user 0m0.088s
00:05:18.140 sys 0m0.013s
11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:18.140 11:13:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:18.140 ************************************
00:05:18.140 END TEST rpc_plugins 00:05:18.140 ************************************ 00:05:18.140 11:13:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:18.140 11:13:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.140 11:13:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.140 11:13:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.140 ************************************ 00:05:18.140 START TEST rpc_trace_cmd_test 00:05:18.140 ************************************ 00:05:18.140 11:13:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:18.140 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:18.140 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:18.140 11:13:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.140 11:13:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.140 11:13:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.140 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:18.140 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1332765", 00:05:18.140 "tpoint_group_mask": "0x8", 00:05:18.140 "iscsi_conn": { 00:05:18.140 "mask": "0x2", 00:05:18.140 "tpoint_mask": "0x0" 00:05:18.140 }, 00:05:18.140 "scsi": { 00:05:18.140 "mask": "0x4", 00:05:18.140 "tpoint_mask": "0x0" 00:05:18.140 }, 00:05:18.140 "bdev": { 00:05:18.140 "mask": "0x8", 00:05:18.140 "tpoint_mask": "0xffffffffffffffff" 00:05:18.140 }, 00:05:18.140 "nvmf_rdma": { 00:05:18.140 "mask": "0x10", 00:05:18.140 "tpoint_mask": "0x0" 00:05:18.140 }, 00:05:18.140 "nvmf_tcp": { 00:05:18.140 "mask": "0x20", 00:05:18.140 "tpoint_mask": "0x0" 00:05:18.140 }, 00:05:18.140 "ftl": { 00:05:18.140 "mask": "0x40", 00:05:18.140 "tpoint_mask": "0x0" 00:05:18.140 }, 00:05:18.140 "blobfs": { 00:05:18.140 "mask": "0x80", 00:05:18.141 
"tpoint_mask": "0x0" 00:05:18.141 }, 00:05:18.141 "dsa": { 00:05:18.141 "mask": "0x200", 00:05:18.141 "tpoint_mask": "0x0" 00:05:18.141 }, 00:05:18.141 "thread": { 00:05:18.141 "mask": "0x400", 00:05:18.141 "tpoint_mask": "0x0" 00:05:18.141 }, 00:05:18.141 "nvme_pcie": { 00:05:18.141 "mask": "0x800", 00:05:18.141 "tpoint_mask": "0x0" 00:05:18.141 }, 00:05:18.141 "iaa": { 00:05:18.141 "mask": "0x1000", 00:05:18.141 "tpoint_mask": "0x0" 00:05:18.141 }, 00:05:18.141 "nvme_tcp": { 00:05:18.141 "mask": "0x2000", 00:05:18.141 "tpoint_mask": "0x0" 00:05:18.141 }, 00:05:18.141 "bdev_nvme": { 00:05:18.141 "mask": "0x4000", 00:05:18.141 "tpoint_mask": "0x0" 00:05:18.141 }, 00:05:18.141 "sock": { 00:05:18.141 "mask": "0x8000", 00:05:18.141 "tpoint_mask": "0x0" 00:05:18.141 } 00:05:18.141 }' 00:05:18.141 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:18.399 00:05:18.399 real 0m0.204s 00:05:18.399 user 0m0.173s 00:05:18.399 sys 0m0.024s 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.399 11:13:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.399 
************************************ 00:05:18.399 END TEST rpc_trace_cmd_test 00:05:18.399 ************************************ 00:05:18.400 11:13:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:18.400 11:13:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:18.400 11:13:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:18.400 11:13:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.400 11:13:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.400 11:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.400 ************************************ 00:05:18.400 START TEST rpc_daemon_integrity 00:05:18.400 ************************************ 00:05:18.400 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:18.400 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.400 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.400 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.400 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.400 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.659 { 00:05:18.659 "name": "Malloc2", 00:05:18.659 "aliases": [ 00:05:18.659 "b8ca2ae6-2b14-4311-80aa-8a53629606da" 00:05:18.659 ], 00:05:18.659 "product_name": "Malloc disk", 00:05:18.659 "block_size": 512, 00:05:18.659 "num_blocks": 16384, 00:05:18.659 "uuid": "b8ca2ae6-2b14-4311-80aa-8a53629606da", 00:05:18.659 "assigned_rate_limits": { 00:05:18.659 "rw_ios_per_sec": 0, 00:05:18.659 "rw_mbytes_per_sec": 0, 00:05:18.659 "r_mbytes_per_sec": 0, 00:05:18.659 "w_mbytes_per_sec": 0 00:05:18.659 }, 00:05:18.659 "claimed": false, 00:05:18.659 "zoned": false, 00:05:18.659 "supported_io_types": { 00:05:18.659 "read": true, 00:05:18.659 "write": true, 00:05:18.659 "unmap": true, 00:05:18.659 "flush": true, 00:05:18.659 "reset": true, 00:05:18.659 "nvme_admin": false, 00:05:18.659 "nvme_io": false, 00:05:18.659 "nvme_io_md": false, 00:05:18.659 "write_zeroes": true, 00:05:18.659 "zcopy": true, 00:05:18.659 "get_zone_info": false, 00:05:18.659 "zone_management": false, 00:05:18.659 "zone_append": false, 00:05:18.659 "compare": false, 00:05:18.659 "compare_and_write": false, 00:05:18.659 "abort": true, 00:05:18.659 "seek_hole": false, 00:05:18.659 "seek_data": false, 00:05:18.659 "copy": true, 00:05:18.659 "nvme_iov_md": false 00:05:18.659 }, 00:05:18.659 "memory_domains": [ 00:05:18.659 { 00:05:18.659 "dma_device_id": "system", 00:05:18.659 "dma_device_type": 1 00:05:18.659 }, 00:05:18.659 { 00:05:18.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.659 "dma_device_type": 2 00:05:18.659 } 00:05:18.659 ], 00:05:18.659 "driver_specific": {} 00:05:18.659 } 00:05:18.659 ]' 00:05:18.659 
11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.659 [2024-07-26 11:13:14.164503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.659 [2024-07-26 11:13:14.164529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.659 [2024-07-26 11:13:14.164540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1718ac0 00:05:18.659 [2024-07-26 11:13:14.164546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.659 [2024-07-26 11:13:14.165451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.659 [2024-07-26 11:13:14.165473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.659 Passthru0 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.659 { 00:05:18.659 "name": "Malloc2", 00:05:18.659 "aliases": [ 00:05:18.659 "b8ca2ae6-2b14-4311-80aa-8a53629606da" 00:05:18.659 ], 00:05:18.659 "product_name": "Malloc disk", 00:05:18.659 "block_size": 512, 00:05:18.659 
"num_blocks": 16384, 00:05:18.659 "uuid": "b8ca2ae6-2b14-4311-80aa-8a53629606da", 00:05:18.659 "assigned_rate_limits": { 00:05:18.659 "rw_ios_per_sec": 0, 00:05:18.659 "rw_mbytes_per_sec": 0, 00:05:18.659 "r_mbytes_per_sec": 0, 00:05:18.659 "w_mbytes_per_sec": 0 00:05:18.659 }, 00:05:18.659 "claimed": true, 00:05:18.659 "claim_type": "exclusive_write", 00:05:18.659 "zoned": false, 00:05:18.659 "supported_io_types": { 00:05:18.659 "read": true, 00:05:18.659 "write": true, 00:05:18.659 "unmap": true, 00:05:18.659 "flush": true, 00:05:18.659 "reset": true, 00:05:18.659 "nvme_admin": false, 00:05:18.659 "nvme_io": false, 00:05:18.659 "nvme_io_md": false, 00:05:18.659 "write_zeroes": true, 00:05:18.659 "zcopy": true, 00:05:18.659 "get_zone_info": false, 00:05:18.659 "zone_management": false, 00:05:18.659 "zone_append": false, 00:05:18.659 "compare": false, 00:05:18.659 "compare_and_write": false, 00:05:18.659 "abort": true, 00:05:18.659 "seek_hole": false, 00:05:18.659 "seek_data": false, 00:05:18.659 "copy": true, 00:05:18.659 "nvme_iov_md": false 00:05:18.659 }, 00:05:18.659 "memory_domains": [ 00:05:18.659 { 00:05:18.659 "dma_device_id": "system", 00:05:18.659 "dma_device_type": 1 00:05:18.659 }, 00:05:18.659 { 00:05:18.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.659 "dma_device_type": 2 00:05:18.659 } 00:05:18.659 ], 00:05:18.659 "driver_specific": {} 00:05:18.659 }, 00:05:18.659 { 00:05:18.659 "name": "Passthru0", 00:05:18.659 "aliases": [ 00:05:18.659 "3fd9fe48-d4d0-5809-878a-cdef6ebc82cd" 00:05:18.659 ], 00:05:18.659 "product_name": "passthru", 00:05:18.659 "block_size": 512, 00:05:18.659 "num_blocks": 16384, 00:05:18.659 "uuid": "3fd9fe48-d4d0-5809-878a-cdef6ebc82cd", 00:05:18.659 "assigned_rate_limits": { 00:05:18.659 "rw_ios_per_sec": 0, 00:05:18.659 "rw_mbytes_per_sec": 0, 00:05:18.659 "r_mbytes_per_sec": 0, 00:05:18.659 "w_mbytes_per_sec": 0 00:05:18.659 }, 00:05:18.659 "claimed": false, 00:05:18.659 "zoned": false, 00:05:18.659 
"supported_io_types": { 00:05:18.659 "read": true, 00:05:18.659 "write": true, 00:05:18.659 "unmap": true, 00:05:18.659 "flush": true, 00:05:18.659 "reset": true, 00:05:18.659 "nvme_admin": false, 00:05:18.659 "nvme_io": false, 00:05:18.659 "nvme_io_md": false, 00:05:18.659 "write_zeroes": true, 00:05:18.659 "zcopy": true, 00:05:18.659 "get_zone_info": false, 00:05:18.659 "zone_management": false, 00:05:18.659 "zone_append": false, 00:05:18.659 "compare": false, 00:05:18.659 "compare_and_write": false, 00:05:18.659 "abort": true, 00:05:18.659 "seek_hole": false, 00:05:18.659 "seek_data": false, 00:05:18.659 "copy": true, 00:05:18.659 "nvme_iov_md": false 00:05:18.659 }, 00:05:18.659 "memory_domains": [ 00:05:18.659 { 00:05:18.659 "dma_device_id": "system", 00:05:18.659 "dma_device_type": 1 00:05:18.659 }, 00:05:18.659 { 00:05:18.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.659 "dma_device_type": 2 00:05:18.659 } 00:05:18.659 ], 00:05:18.659 "driver_specific": { 00:05:18.659 "passthru": { 00:05:18.659 "name": "Passthru0", 00:05:18.659 "base_bdev_name": "Malloc2" 00:05:18.659 } 00:05:18.659 } 00:05:18.659 } 00:05:18.659 ]' 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.659 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.660 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.660 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.660 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.660 11:13:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.660 00:05:18.660 real 0m0.264s 00:05:18.660 user 0m0.167s 00:05:18.660 sys 0m0.036s 00:05:18.660 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.660 11:13:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.660 ************************************ 00:05:18.660 END TEST rpc_daemon_integrity 00:05:18.660 ************************************ 00:05:18.918 11:13:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.918 11:13:14 rpc -- rpc/rpc.sh@84 -- # killprocess 1332765 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@950 -- # '[' -z 1332765 ']' 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@954 -- # kill -0 1332765 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@955 -- # uname 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1332765 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.918 11:13:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1332765' 
00:05:18.918 killing process with pid 1332765 00:05:18.919 11:13:14 rpc -- common/autotest_common.sh@969 -- # kill 1332765 00:05:18.919 11:13:14 rpc -- common/autotest_common.sh@974 -- # wait 1332765 00:05:19.178 00:05:19.178 real 0m2.444s 00:05:19.178 user 0m3.152s 00:05:19.178 sys 0m0.670s 00:05:19.178 11:13:14 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.178 11:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.178 ************************************ 00:05:19.178 END TEST rpc 00:05:19.178 ************************************ 00:05:19.178 11:13:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.178 11:13:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.178 11:13:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.178 11:13:14 -- common/autotest_common.sh@10 -- # set +x 00:05:19.178 ************************************ 00:05:19.178 START TEST skip_rpc 00:05:19.178 ************************************ 00:05:19.178 11:13:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.454 * Looking for test storage... 
00:05:19.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.454 11:13:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.454 11:13:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:19.454 11:13:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:19.454 11:13:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.454 11:13:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.454 11:13:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.454 ************************************ 00:05:19.454 START TEST skip_rpc 00:05:19.455 ************************************ 00:05:19.455 11:13:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:19.455 11:13:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1333399 00:05:19.455 11:13:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.455 11:13:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:19.455 11:13:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:19.455 [2024-07-26 11:13:14.937224] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:05:19.455 [2024-07-26 11:13:14.937259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333399 ] 00:05:19.455 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.455 [2024-07-26 11:13:14.988354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.455 [2024-07-26 11:13:15.060045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1333399 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1333399 ']' 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1333399 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333399 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333399' 00:05:24.724 killing process with pid 1333399 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1333399 00:05:24.724 11:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1333399 00:05:24.724 00:05:24.724 real 0m5.371s 00:05:24.724 user 0m5.154s 00:05:24.724 sys 0m0.247s 00:05:24.724 11:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.724 11:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 ************************************ 00:05:24.724 END TEST skip_rpc 00:05:24.724 ************************************ 00:05:24.724 11:13:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:24.724 11:13:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.724 11:13:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.724 11:13:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 
************************************ 00:05:24.724 START TEST skip_rpc_with_json 00:05:24.724 ************************************ 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1334351 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1334351 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1334351 ']' 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.724 11:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 [2024-07-26 11:13:20.370767] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:05:24.724 [2024-07-26 11:13:20.370805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334351 ] 00:05:24.983 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.983 [2024-07-26 11:13:20.435361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.983 [2024-07-26 11:13:20.513729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.551 [2024-07-26 11:13:21.172172] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:25.551 request: 00:05:25.551 { 00:05:25.551 "trtype": "tcp", 00:05:25.551 "method": "nvmf_get_transports", 00:05:25.551 "req_id": 1 00:05:25.551 } 00:05:25.551 Got JSON-RPC error response 00:05:25.551 response: 00:05:25.551 { 00:05:25.551 "code": -19, 00:05:25.551 "message": "No such device" 00:05:25.551 } 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.551 [2024-07-26 11:13:21.184279] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.551 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.810 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.810 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.810 { 00:05:25.810 "subsystems": [ 00:05:25.810 { 00:05:25.810 "subsystem": "vfio_user_target", 00:05:25.810 "config": null 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "subsystem": "keyring", 00:05:25.810 "config": [] 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "subsystem": "iobuf", 00:05:25.810 "config": [ 00:05:25.810 { 00:05:25.810 "method": "iobuf_set_options", 00:05:25.810 "params": { 00:05:25.810 "small_pool_count": 8192, 00:05:25.810 "large_pool_count": 1024, 00:05:25.810 "small_bufsize": 8192, 00:05:25.810 "large_bufsize": 135168 00:05:25.810 } 00:05:25.810 } 00:05:25.810 ] 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "subsystem": "sock", 00:05:25.810 "config": [ 00:05:25.810 { 00:05:25.810 "method": "sock_set_default_impl", 00:05:25.810 "params": { 00:05:25.810 "impl_name": "posix" 00:05:25.810 } 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "method": "sock_impl_set_options", 00:05:25.810 "params": { 00:05:25.810 "impl_name": "ssl", 00:05:25.810 "recv_buf_size": 4096, 00:05:25.810 "send_buf_size": 4096, 00:05:25.810 "enable_recv_pipe": true, 00:05:25.810 "enable_quickack": false, 00:05:25.810 "enable_placement_id": 0, 00:05:25.810 "enable_zerocopy_send_server": true, 00:05:25.810 "enable_zerocopy_send_client": false, 00:05:25.810 "zerocopy_threshold": 0, 
00:05:25.810 "tls_version": 0, 00:05:25.810 "enable_ktls": false 00:05:25.810 } 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "method": "sock_impl_set_options", 00:05:25.810 "params": { 00:05:25.810 "impl_name": "posix", 00:05:25.810 "recv_buf_size": 2097152, 00:05:25.810 "send_buf_size": 2097152, 00:05:25.810 "enable_recv_pipe": true, 00:05:25.810 "enable_quickack": false, 00:05:25.810 "enable_placement_id": 0, 00:05:25.810 "enable_zerocopy_send_server": true, 00:05:25.810 "enable_zerocopy_send_client": false, 00:05:25.810 "zerocopy_threshold": 0, 00:05:25.810 "tls_version": 0, 00:05:25.810 "enable_ktls": false 00:05:25.810 } 00:05:25.810 } 00:05:25.810 ] 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "subsystem": "vmd", 00:05:25.810 "config": [] 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "subsystem": "accel", 00:05:25.810 "config": [ 00:05:25.810 { 00:05:25.810 "method": "accel_set_options", 00:05:25.810 "params": { 00:05:25.810 "small_cache_size": 128, 00:05:25.810 "large_cache_size": 16, 00:05:25.810 "task_count": 2048, 00:05:25.810 "sequence_count": 2048, 00:05:25.810 "buf_count": 2048 00:05:25.810 } 00:05:25.810 } 00:05:25.810 ] 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "subsystem": "bdev", 00:05:25.810 "config": [ 00:05:25.810 { 00:05:25.810 "method": "bdev_set_options", 00:05:25.810 "params": { 00:05:25.810 "bdev_io_pool_size": 65535, 00:05:25.810 "bdev_io_cache_size": 256, 00:05:25.810 "bdev_auto_examine": true, 00:05:25.810 "iobuf_small_cache_size": 128, 00:05:25.810 "iobuf_large_cache_size": 16 00:05:25.810 } 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "method": "bdev_raid_set_options", 00:05:25.810 "params": { 00:05:25.810 "process_window_size_kb": 1024, 00:05:25.810 "process_max_bandwidth_mb_sec": 0 00:05:25.810 } 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "method": "bdev_iscsi_set_options", 00:05:25.810 "params": { 00:05:25.810 "timeout_sec": 30 00:05:25.810 } 00:05:25.810 }, 00:05:25.810 { 00:05:25.810 "method": "bdev_nvme_set_options", 00:05:25.810 
"params": { 00:05:25.810 "action_on_timeout": "none", 00:05:25.810 "timeout_us": 0, 00:05:25.810 "timeout_admin_us": 0, 00:05:25.810 "keep_alive_timeout_ms": 10000, 00:05:25.810 "arbitration_burst": 0, 00:05:25.810 "low_priority_weight": 0, 00:05:25.810 "medium_priority_weight": 0, 00:05:25.810 "high_priority_weight": 0, 00:05:25.810 "nvme_adminq_poll_period_us": 10000, 00:05:25.810 "nvme_ioq_poll_period_us": 0, 00:05:25.810 "io_queue_requests": 0, 00:05:25.810 "delay_cmd_submit": true, 00:05:25.810 "transport_retry_count": 4, 00:05:25.810 "bdev_retry_count": 3, 00:05:25.810 "transport_ack_timeout": 0, 00:05:25.810 "ctrlr_loss_timeout_sec": 0, 00:05:25.810 "reconnect_delay_sec": 0, 00:05:25.810 "fast_io_fail_timeout_sec": 0, 00:05:25.810 "disable_auto_failback": false, 00:05:25.810 "generate_uuids": false, 00:05:25.810 "transport_tos": 0, 00:05:25.810 "nvme_error_stat": false, 00:05:25.810 "rdma_srq_size": 0, 00:05:25.810 "io_path_stat": false, 00:05:25.810 "allow_accel_sequence": false, 00:05:25.810 "rdma_max_cq_size": 0, 00:05:25.810 "rdma_cm_event_timeout_ms": 0, 00:05:25.810 "dhchap_digests": [ 00:05:25.811 "sha256", 00:05:25.811 "sha384", 00:05:25.811 "sha512" 00:05:25.811 ], 00:05:25.811 "dhchap_dhgroups": [ 00:05:25.811 "null", 00:05:25.811 "ffdhe2048", 00:05:25.811 "ffdhe3072", 00:05:25.811 "ffdhe4096", 00:05:25.811 "ffdhe6144", 00:05:25.811 "ffdhe8192" 00:05:25.811 ] 00:05:25.811 } 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "method": "bdev_nvme_set_hotplug", 00:05:25.811 "params": { 00:05:25.811 "period_us": 100000, 00:05:25.811 "enable": false 00:05:25.811 } 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "method": "bdev_wait_for_examine" 00:05:25.811 } 00:05:25.811 ] 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "scsi", 00:05:25.811 "config": null 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "scheduler", 00:05:25.811 "config": [ 00:05:25.811 { 00:05:25.811 "method": "framework_set_scheduler", 00:05:25.811 "params": { 00:05:25.811 
"name": "static" 00:05:25.811 } 00:05:25.811 } 00:05:25.811 ] 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "vhost_scsi", 00:05:25.811 "config": [] 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "vhost_blk", 00:05:25.811 "config": [] 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "ublk", 00:05:25.811 "config": [] 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "nbd", 00:05:25.811 "config": [] 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "nvmf", 00:05:25.811 "config": [ 00:05:25.811 { 00:05:25.811 "method": "nvmf_set_config", 00:05:25.811 "params": { 00:05:25.811 "discovery_filter": "match_any", 00:05:25.811 "admin_cmd_passthru": { 00:05:25.811 "identify_ctrlr": false 00:05:25.811 } 00:05:25.811 } 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "method": "nvmf_set_max_subsystems", 00:05:25.811 "params": { 00:05:25.811 "max_subsystems": 1024 00:05:25.811 } 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "method": "nvmf_set_crdt", 00:05:25.811 "params": { 00:05:25.811 "crdt1": 0, 00:05:25.811 "crdt2": 0, 00:05:25.811 "crdt3": 0 00:05:25.811 } 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "method": "nvmf_create_transport", 00:05:25.811 "params": { 00:05:25.811 "trtype": "TCP", 00:05:25.811 "max_queue_depth": 128, 00:05:25.811 "max_io_qpairs_per_ctrlr": 127, 00:05:25.811 "in_capsule_data_size": 4096, 00:05:25.811 "max_io_size": 131072, 00:05:25.811 "io_unit_size": 131072, 00:05:25.811 "max_aq_depth": 128, 00:05:25.811 "num_shared_buffers": 511, 00:05:25.811 "buf_cache_size": 4294967295, 00:05:25.811 "dif_insert_or_strip": false, 00:05:25.811 "zcopy": false, 00:05:25.811 "c2h_success": true, 00:05:25.811 "sock_priority": 0, 00:05:25.811 "abort_timeout_sec": 1, 00:05:25.811 "ack_timeout": 0, 00:05:25.811 "data_wr_pool_size": 0 00:05:25.811 } 00:05:25.811 } 00:05:25.811 ] 00:05:25.811 }, 00:05:25.811 { 00:05:25.811 "subsystem": "iscsi", 00:05:25.811 "config": [ 00:05:25.811 { 00:05:25.811 "method": "iscsi_set_options", 00:05:25.811 
"params": { 00:05:25.811 "node_base": "iqn.2016-06.io.spdk", 00:05:25.811 "max_sessions": 128, 00:05:25.811 "max_connections_per_session": 2, 00:05:25.811 "max_queue_depth": 64, 00:05:25.811 "default_time2wait": 2, 00:05:25.811 "default_time2retain": 20, 00:05:25.811 "first_burst_length": 8192, 00:05:25.811 "immediate_data": true, 00:05:25.811 "allow_duplicated_isid": false, 00:05:25.811 "error_recovery_level": 0, 00:05:25.811 "nop_timeout": 60, 00:05:25.811 "nop_in_interval": 30, 00:05:25.811 "disable_chap": false, 00:05:25.811 "require_chap": false, 00:05:25.811 "mutual_chap": false, 00:05:25.811 "chap_group": 0, 00:05:25.811 "max_large_datain_per_connection": 64, 00:05:25.811 "max_r2t_per_connection": 4, 00:05:25.811 "pdu_pool_size": 36864, 00:05:25.811 "immediate_data_pool_size": 16384, 00:05:25.811 "data_out_pool_size": 2048 00:05:25.811 } 00:05:25.811 } 00:05:25.811 ] 00:05:25.811 } 00:05:25.811 ] 00:05:25.811 } 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1334351 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1334351 ']' 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1334351 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334351 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1334351' 00:05:25.811 killing process with pid 1334351 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1334351 00:05:25.811 11:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1334351 00:05:26.070 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1334588 00:05:26.070 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.070 11:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1334588 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1334588 ']' 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1334588 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334588 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334588' 00:05:31.339 killing process with pid 1334588 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1334588 00:05:31.339 11:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1334588 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:31.598 00:05:31.598 real 0m6.729s 00:05:31.598 user 0m6.545s 00:05:31.598 sys 0m0.595s 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.598 ************************************ 00:05:31.598 END TEST skip_rpc_with_json 00:05:31.598 ************************************ 00:05:31.598 11:13:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.598 11:13:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.598 11:13:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.598 11:13:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.598 ************************************ 00:05:31.598 START TEST skip_rpc_with_delay 00:05:31.598 ************************************ 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.598 
11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.598 [2024-07-26 11:13:27.173174] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
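[Editor's note] The `NOT` wrapper exercised above runs spdk_tgt with the incompatible `--wait-for-rpc`/`--no-rpc-server` pair and passes only when the process exits nonzero. A minimal, self-contained sketch of that expect-failure pattern (using a hypothetical stand-in command, since spdk_tgt itself is not assumed available here):

```python
import subprocess
import sys

def expect_failure(cmd):
    """Return True only if cmd exits nonzero, mirroring the NOT() helper
    used in the log above to assert that an invocation must fail."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode != 0

# Stand-ins for a failing and a succeeding target invocation:
assert expect_failure([sys.executable, "-c", "raise SystemExit(1)"])
assert not expect_failure([sys.executable, "-c", "raise SystemExit(0)"])
```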
00:05:31.598 [2024-07-26 11:13:27.173227] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.598 00:05:31.598 real 0m0.064s 00:05:31.598 user 0m0.042s 00:05:31.598 sys 0m0.021s 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.598 11:13:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:31.598 ************************************ 00:05:31.598 END TEST skip_rpc_with_delay 00:05:31.598 ************************************ 00:05:31.598 11:13:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.598 11:13:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.598 11:13:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.598 11:13:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.598 11:13:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.598 11:13:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.598 ************************************ 00:05:31.598 START TEST exit_on_failed_rpc_init 00:05:31.598 ************************************ 00:05:31.598 11:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:31.598 11:13:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1335559 00:05:31.599 11:13:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1335559 00:05:31.599 11:13:27 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.599 11:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1335559 ']' 00:05:31.599 11:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.599 11:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.599 11:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.599 11:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.599 11:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.857 [2024-07-26 11:13:27.301450] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
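[Editor's note] The config.json dumped earlier in this log by `save_config` has a regular shape: a top-level `subsystems` list whose entries carry a subsystem name and a `config` list of `{"method", "params"}` calls (or `null`/`[]` when unconfigured). A minimal sketch of walking that shape, with values taken from the dump above:

```python
import json

# Miniature excerpt in the same shape as the save_config output above
# (nvmf_create_transport params copied from the dump; not the full file).
doc = json.loads("""
{
  "subsystems": [
    {"subsystem": "scsi", "config": null},
    {"subsystem": "nvmf", "config": [
      {"method": "nvmf_create_transport",
       "params": {"trtype": "TCP", "max_queue_depth": 128}}
    ]}
  ]
}
""")

def methods_for(doc, name):
    """Collect the RPC method names recorded for one subsystem,
    tolerating the null config used by unconfigured subsystems."""
    for sub in doc["subsystems"]:
        if sub["subsystem"] == name:
            return [call["method"] for call in (sub["config"] or [])]
    return []

assert methods_for(doc, "nvmf") == ["nvmf_create_transport"]
assert methods_for(doc, "scsi") == []
```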
00:05:31.857 [2024-07-26 11:13:27.301486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335559 ] 00:05:31.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.857 [2024-07-26 11:13:27.365316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.857 [2024-07-26 11:13:27.437258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.793 11:13:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.793 [2024-07-26 11:13:28.145432] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:05:32.793 [2024-07-26 11:13:28.145478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335619 ] 00:05:32.793 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.793 [2024-07-26 11:13:28.212193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.793 [2024-07-26 11:13:28.283478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.793 [2024-07-26 11:13:28.283546] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:32.793 [2024-07-26 11:13:28.283555] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.793 [2024-07-26 11:13:28.283561] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:32.793 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1335559 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1335559 ']' 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1335559 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1335559 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1335559' 
00:05:32.794 killing process with pid 1335559 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1335559 00:05:32.794 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1335559 00:05:33.053 00:05:33.053 real 0m1.455s 00:05:33.053 user 0m1.653s 00:05:33.053 sys 0m0.420s 00:05:33.053 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.053 11:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.053 ************************************ 00:05:33.053 END TEST exit_on_failed_rpc_init 00:05:33.053 ************************************ 00:05:33.311 11:13:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.311 00:05:33.311 real 0m13.983s 00:05:33.311 user 0m13.520s 00:05:33.311 sys 0m1.547s 00:05:33.311 11:13:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.311 11:13:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.311 ************************************ 00:05:33.311 END TEST skip_rpc 00:05:33.311 ************************************ 00:05:33.312 11:13:28 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.312 11:13:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.312 11:13:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.312 11:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.312 ************************************ 00:05:33.312 START TEST rpc_client 00:05:33.312 ************************************ 00:05:33.312 11:13:28 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.312 * Looking for test storage... 
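[Editor's note] The exit_on_failed_rpc_init run above fails because the second spdk_tgt cannot listen on /var/tmp/spdk.sock while the first still holds it ("RPC Unix domain socket path ... in use"). A self-contained sketch of that underlying behavior, binding two Unix domain sockets to the same temporary path:

```python
import errno
import os
import socket
import tempfile

# First listener claims the path, just as the first spdk_tgt holds
# /var/tmp/spdk.sock; the second bind to the same path must fail.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

first = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
first.bind(path)
first.listen(1)

second = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    second.bind(path)
    in_use = False
except OSError as exc:
    in_use = exc.errno == errno.EADDRINUSE
finally:
    second.close()
    first.close()

assert in_use  # the path was already claimed by the first listener
```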
00:05:33.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:33.312 11:13:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:33.312 OK 00:05:33.312 11:13:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.312 00:05:33.312 real 0m0.114s 00:05:33.312 user 0m0.051s 00:05:33.312 sys 0m0.071s 00:05:33.312 11:13:28 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.312 11:13:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.312 ************************************ 00:05:33.312 END TEST rpc_client 00:05:33.312 ************************************ 00:05:33.312 11:13:28 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.312 11:13:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.312 11:13:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.312 11:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.572 ************************************ 00:05:33.572 START TEST json_config 00:05:33.572 ************************************ 00:05:33.572 11:13:28 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.572 11:13:29 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.572 11:13:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.572 11:13:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.572 11:13:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.572 11:13:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:33.572 11:13:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.572 11:13:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.572 11:13:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:33.572 11:13:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@47 -- # : 0 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.572 11:13:29 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.572 11:13:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:33.572 11:13:29 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:33.572 INFO: JSON configuration test init 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.572 11:13:29 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:33.572 11:13:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:33.572 11:13:29 json_config -- json_config/common.sh@10 -- # shift 00:05:33.572 11:13:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.572 11:13:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.572 11:13:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.572 11:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.572 11:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.572 11:13:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1335913 00:05:33.572 11:13:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.572 Waiting for target to run... 
00:05:33.572 11:13:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1335913 /var/tmp/spdk_tgt.sock 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@831 -- # '[' -z 1335913 ']' 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.572 11:13:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.572 11:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.572 [2024-07-26 11:13:29.154301] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:05:33.572 [2024-07-26 11:13:29.154349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335913 ] 00:05:33.572 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.831 [2024-07-26 11:13:29.435495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.089 [2024-07-26 11:13:29.503987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.349 11:13:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.349 11:13:29 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:34.349 11:13:29 json_config -- json_config/common.sh@26 -- # echo '' 00:05:34.349 00:05:34.349 11:13:29 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:34.349 11:13:29 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:34.349 11:13:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.349 11:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.349 11:13:29 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:34.349 11:13:29 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:34.349 11:13:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.349 11:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.349 11:13:29 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:34.349 11:13:29 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:34.349 11:13:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:37.634 
11:13:33 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:37.634 11:13:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.634 11:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:37.634 11:13:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@51 -- # sort 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:37.634 11:13:33 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.634 11:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:37.634 11:13:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.634 11:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:37.634 11:13:33 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.634 11:13:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.897 MallocForNvmf0 00:05:37.897 11:13:33 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.897 11:13:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.158 MallocForNvmf1 00:05:38.158 11:13:33 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.158 11:13:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.158 [2024-07-26 11:13:33.806408] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.416 11:13:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.416 11:13:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.416 11:13:34 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.416 11:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.675 11:13:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.675 11:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.933 11:13:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.933 11:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.933 [2024-07-26 11:13:34.516614] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.933 11:13:34 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:38.933 11:13:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.933 11:13:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.933 11:13:34 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:38.933 11:13:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.933 11:13:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.192 11:13:34 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:39.192 11:13:34 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.192 11:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.192 MallocBdevForConfigChangeCheck 00:05:39.192 11:13:34 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:39.192 11:13:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.192 11:13:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.192 11:13:34 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:39.192 11:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.759 11:13:35 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:39.759 INFO: shutting down applications... 
00:05:39.759 11:13:35 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:39.759 11:13:35 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:39.759 11:13:35 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:39.759 11:13:35 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:41.658 Calling clear_iscsi_subsystem 00:05:41.658 Calling clear_nvmf_subsystem 00:05:41.658 Calling clear_nbd_subsystem 00:05:41.658 Calling clear_ublk_subsystem 00:05:41.659 Calling clear_vhost_blk_subsystem 00:05:41.659 Calling clear_vhost_scsi_subsystem 00:05:41.659 Calling clear_bdev_subsystem 00:05:41.659 11:13:37 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:41.659 11:13:37 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:41.659 11:13:37 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:41.659 11:13:37 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.659 11:13:37 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:41.659 11:13:37 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:42.226 11:13:37 json_config -- json_config/json_config.sh@349 -- # break 00:05:42.226 11:13:37 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:42.226 11:13:37 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:42.226 11:13:37 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:42.226 11:13:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.226 11:13:37 json_config -- json_config/common.sh@35 -- # [[ -n 1335913 ]] 00:05:42.226 11:13:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1335913 00:05:42.226 11:13:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.226 11:13:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.226 11:13:37 json_config -- json_config/common.sh@41 -- # kill -0 1335913 00:05:42.226 11:13:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.485 11:13:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.485 11:13:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.485 11:13:38 json_config -- json_config/common.sh@41 -- # kill -0 1335913 00:05:42.485 11:13:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.485 11:13:38 json_config -- json_config/common.sh@43 -- # break 00:05:42.485 11:13:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.485 11:13:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.485 SPDK target shutdown done 00:05:42.485 11:13:38 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:42.485 INFO: relaunching applications... 
00:05:42.485 11:13:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.485 11:13:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:42.485 11:13:38 json_config -- json_config/common.sh@10 -- # shift 00:05:42.744 11:13:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:42.744 11:13:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:42.744 11:13:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:42.744 11:13:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.744 11:13:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.744 11:13:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.744 11:13:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1337660 00:05:42.744 11:13:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:42.744 Waiting for target to run... 00:05:42.744 11:13:38 json_config -- json_config/common.sh@25 -- # waitforlisten 1337660 /var/tmp/spdk_tgt.sock 00:05:42.744 11:13:38 json_config -- common/autotest_common.sh@831 -- # '[' -z 1337660 ']' 00:05:42.744 11:13:38 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:42.744 11:13:38 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.744 11:13:38 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:42.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:42.744 11:13:38 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.744 11:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.744 [2024-07-26 11:13:38.200886] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:05:42.744 [2024-07-26 11:13:38.200941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337660 ] 00:05:42.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.003 [2024-07-26 11:13:38.655948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.262 [2024-07-26 11:13:38.744521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.547 [2024-07-26 11:13:41.756511] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.547 [2024-07-26 11:13:41.788822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:46.806 11:13:42 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.806 11:13:42 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:46.806 11:13:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:46.806 00:05:46.806 11:13:42 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:46.806 11:13:42 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:46.806 INFO: Checking if target configuration is the same... 
00:05:46.806 11:13:42 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.806 11:13:42 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:46.806 11:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.806 + '[' 2 -ne 2 ']' 00:05:46.806 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.806 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:46.806 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.806 +++ basename /dev/fd/62 00:05:46.806 ++ mktemp /tmp/62.XXX 00:05:46.806 + tmp_file_1=/tmp/62.QF9 00:05:46.806 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.806 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.806 + tmp_file_2=/tmp/spdk_tgt_config.json.Keg 00:05:46.806 + ret=0 00:05:46.806 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.064 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.323 + diff -u /tmp/62.QF9 /tmp/spdk_tgt_config.json.Keg 00:05:47.323 + echo 'INFO: JSON config files are the same' 00:05:47.323 INFO: JSON config files are the same 00:05:47.323 + rm /tmp/62.QF9 /tmp/spdk_tgt_config.json.Keg 00:05:47.323 + exit 0 00:05:47.323 11:13:42 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:47.323 11:13:42 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:47.323 INFO: changing configuration and checking if this can be detected... 
00:05:47.323 11:13:42 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:47.323 11:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:47.323 11:13:42 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.323 11:13:42 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:47.323 11:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.323 + '[' 2 -ne 2 ']' 00:05:47.323 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:47.323 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:47.323 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.323 +++ basename /dev/fd/62 00:05:47.323 ++ mktemp /tmp/62.XXX 00:05:47.323 + tmp_file_1=/tmp/62.rCZ 00:05:47.323 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.323 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:47.323 + tmp_file_2=/tmp/spdk_tgt_config.json.ywu 00:05:47.323 + ret=0 00:05:47.323 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.581 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.850 + diff -u /tmp/62.rCZ /tmp/spdk_tgt_config.json.ywu 00:05:47.850 + ret=1 00:05:47.850 + echo '=== Start of file: /tmp/62.rCZ ===' 00:05:47.850 + cat /tmp/62.rCZ 00:05:47.850 + echo '=== End of file: /tmp/62.rCZ ===' 00:05:47.850 + echo '' 00:05:47.850 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ywu ===' 00:05:47.850 + cat /tmp/spdk_tgt_config.json.ywu 00:05:47.850 + echo '=== End of file: /tmp/spdk_tgt_config.json.ywu ===' 00:05:47.850 + echo '' 00:05:47.852 + rm /tmp/62.rCZ /tmp/spdk_tgt_config.json.ywu 00:05:47.852 + exit 1 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:47.852 INFO: configuration change detected. 
00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@321 -- # [[ -n 1337660 ]] 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.852 11:13:43 json_config -- json_config/json_config.sh@327 -- # killprocess 1337660 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@950 -- # '[' -z 1337660 ']' 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@954 -- # kill -0 
1337660 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@955 -- # uname 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1337660 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1337660' 00:05:47.852 killing process with pid 1337660 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@969 -- # kill 1337660 00:05:47.852 11:13:43 json_config -- common/autotest_common.sh@974 -- # wait 1337660 00:05:49.762 11:13:45 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.762 11:13:45 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:49.762 11:13:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.762 11:13:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.021 11:13:45 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:50.021 11:13:45 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:50.021 INFO: Success 00:05:50.021 00:05:50.021 real 0m16.442s 00:05:50.021 user 0m17.316s 00:05:50.021 sys 0m1.916s 00:05:50.021 11:13:45 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.021 11:13:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.021 ************************************ 00:05:50.021 END TEST json_config 00:05:50.021 ************************************ 00:05:50.021 11:13:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:50.021 11:13:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.021 11:13:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.021 11:13:45 -- common/autotest_common.sh@10 -- # set +x 00:05:50.021 ************************************ 00:05:50.021 START TEST json_config_extra_key 00:05:50.021 ************************************ 00:05:50.021 11:13:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:50.021 11:13:45 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.021 11:13:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.021 11:13:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.021 11:13:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.021 11:13:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.021 11:13:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.021 11:13:45 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.021 11:13:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:50.021 11:13:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:50.021 11:13:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:50.021 11:13:45 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:50.021 INFO: launching applications... 
00:05:50.021 11:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:50.021 11:13:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:50.021 11:13:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:50.021 11:13:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.021 11:13:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.022 11:13:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.022 11:13:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.022 11:13:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.022 11:13:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1338933 00:05:50.022 11:13:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.022 Waiting for target to run... 
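Editor's note: json_config_test_start_app launches spdk_tgt and then blocks until /var/tmp/spdk_tgt.sock appears. A simplified sketch of that wait (the real waitforlisten in autotest_common.sh additionally probes the socket with rpc.py and uses max_retries=100) might look like:

```shell
#!/usr/bin/env bash
# wait_for_socket: poll for a UNIX domain socket to appear.
#   $1 = socket path, $2 = delay between tries, $3 = max tries
# Returns 0 once the socket exists, 1 on timeout.
wait_for_socket() {
    local sock=$1 delay=${2:-0.5} tries=${3:-100} i
    for ((i = 0; i < tries; i++)); do
        [ -S "$sock" ] && return 0      # -S: exists and is a socket
        sleep "$delay"
    done
    return 1
}
```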
00:05:50.022 11:13:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1338933 /var/tmp/spdk_tgt.sock 00:05:50.022 11:13:45 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1338933 ']' 00:05:50.022 11:13:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:50.022 11:13:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.022 11:13:45 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.022 11:13:45 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.022 11:13:45 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.022 11:13:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.022 [2024-07-26 11:13:45.656785] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:05:50.022 [2024-07-26 11:13:45.656837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338933 ] 00:05:50.022 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.589 [2024-07-26 11:13:46.099250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.589 [2024-07-26 11:13:46.186334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.849 11:13:46 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.849 11:13:46 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.849 00:05:50.849 11:13:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:50.849 INFO: shutting down applications... 
00:05:50.849 11:13:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1338933 ]] 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1338933 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1338933 00:05:50.849 11:13:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.465 11:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.465 11:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.465 11:13:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1338933 00:05:51.465 11:13:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.465 11:13:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:51.465 11:13:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.465 11:13:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.465 SPDK target shutdown done 00:05:51.465 11:13:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:51.465 Success 00:05:51.465 00:05:51.465 real 0m1.454s 00:05:51.465 user 0m1.067s 00:05:51.465 sys 0m0.542s 00:05:51.465 11:13:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.465 11:13:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.465 
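Editor's note: the shutdown path traced above (json_config/common.sh@38-45) sends SIGINT and then polls `kill -0` up to 30 times at half-second intervals. Roughly, with the signal made a parameter purely for illustration (SPDK itself uses SIGINT):

```shell
#!/usr/bin/env bash
# shutdown_app: signal $1 and wait up to 30 half-second intervals
# for it to exit; escalate to SIGKILL on timeout.
shutdown_app() {
    local pid=$1 sig=${2:-INT} i
    kill -s "$sig" "$pid" 2>/dev/null
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
        sleep 0.5
    done
    echo "app $pid did not exit, sending SIGKILL" >&2
    kill -9 "$pid" 2>/dev/null
    return 1
}
```

Note that a background job started from a non-interactive shell ignores SIGINT, which is why spdk_tgt installs its own handler to catch it.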
************************************ 00:05:51.465 END TEST json_config_extra_key 00:05:51.465 ************************************ 00:05:51.465 11:13:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.465 11:13:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.465 11:13:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.465 11:13:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.465 ************************************ 00:05:51.465 START TEST alias_rpc 00:05:51.465 ************************************ 00:05:51.465 11:13:47 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.466 * Looking for test storage... 00:05:51.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:51.466 11:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.466 11:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1339223 00:05:51.466 11:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1339223 00:05:51.466 11:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.466 11:13:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1339223 ']' 00:05:51.466 11:13:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.466 11:13:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.466 11:13:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:51.724 11:13:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.724 11:13:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.724 [2024-07-26 11:13:47.174435] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:05:51.724 [2024-07-26 11:13:47.174485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339223 ] 00:05:51.724 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.724 [2024-07-26 11:13:47.241819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.724 [2024-07-26 11:13:47.314861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.661 11:13:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.661 11:13:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.661 11:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:52.661 11:13:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1339223 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1339223 ']' 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1339223 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1339223 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1339223' 
00:05:52.661 killing process with pid 1339223 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@969 -- # kill 1339223 00:05:52.661 11:13:48 alias_rpc -- common/autotest_common.sh@974 -- # wait 1339223 00:05:52.921 00:05:52.921 real 0m1.487s 00:05:52.921 user 0m1.611s 00:05:52.921 sys 0m0.412s 00:05:52.921 11:13:48 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.921 11:13:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.921 ************************************ 00:05:52.921 END TEST alias_rpc 00:05:52.921 ************************************ 00:05:52.921 11:13:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:52.921 11:13:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.921 11:13:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.921 11:13:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.921 11:13:48 -- common/autotest_common.sh@10 -- # set +x 00:05:53.180 ************************************ 00:05:53.180 START TEST spdkcli_tcp 00:05:53.180 ************************************ 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:53.180 * Looking for test storage... 
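Editor's note: killprocess, as traced above, checks `ps -o comm=` and refuses to signal the process if it is the sudo wrapper. A condensed sketch of that guard (assuming a procps-style ps; the real helper lives in autotest_common.sh):

```shell
#!/usr/bin/env bash
# killprocess: terminate $1 unless it is already gone or is the
# sudo wrapper itself; reap it before returning.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1           # already gone
    name=$(ps -o comm= -p "$pid" 2>/dev/null || echo unknown)
    if [ "$name" = "sudo" ]; then
        return 1                                     # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null                          # reap our child
    return 0
}
```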
00:05:53.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1339521 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1339521 00:05:53.180 11:13:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1339521 ']' 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.180 11:13:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.180 [2024-07-26 11:13:48.734540] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:05:53.180 [2024-07-26 11:13:48.734589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339521 ] 00:05:53.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.180 [2024-07-26 11:13:48.787515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.439 [2024-07-26 11:13:48.864980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.439 [2024-07-26 11:13:48.864983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.006 11:13:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.006 11:13:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:54.006 11:13:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1339741 00:05:54.006 11:13:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:54.006 11:13:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:54.266 [ 00:05:54.266 "bdev_malloc_delete", 00:05:54.266 "bdev_malloc_create", 00:05:54.266 "bdev_null_resize", 00:05:54.266 "bdev_null_delete", 00:05:54.266 "bdev_null_create", 00:05:54.266 "bdev_nvme_cuse_unregister", 00:05:54.266 "bdev_nvme_cuse_register", 00:05:54.266 "bdev_opal_new_user", 00:05:54.266 "bdev_opal_set_lock_state", 00:05:54.266 "bdev_opal_delete", 00:05:54.266 "bdev_opal_get_info", 00:05:54.266 "bdev_opal_create", 00:05:54.266 "bdev_nvme_opal_revert", 00:05:54.266 
"bdev_nvme_opal_init", 00:05:54.266 "bdev_nvme_send_cmd", 00:05:54.266 "bdev_nvme_get_path_iostat", 00:05:54.266 "bdev_nvme_get_mdns_discovery_info", 00:05:54.266 "bdev_nvme_stop_mdns_discovery", 00:05:54.266 "bdev_nvme_start_mdns_discovery", 00:05:54.266 "bdev_nvme_set_multipath_policy", 00:05:54.266 "bdev_nvme_set_preferred_path", 00:05:54.266 "bdev_nvme_get_io_paths", 00:05:54.266 "bdev_nvme_remove_error_injection", 00:05:54.266 "bdev_nvme_add_error_injection", 00:05:54.266 "bdev_nvme_get_discovery_info", 00:05:54.266 "bdev_nvme_stop_discovery", 00:05:54.266 "bdev_nvme_start_discovery", 00:05:54.266 "bdev_nvme_get_controller_health_info", 00:05:54.267 "bdev_nvme_disable_controller", 00:05:54.267 "bdev_nvme_enable_controller", 00:05:54.267 "bdev_nvme_reset_controller", 00:05:54.267 "bdev_nvme_get_transport_statistics", 00:05:54.267 "bdev_nvme_apply_firmware", 00:05:54.267 "bdev_nvme_detach_controller", 00:05:54.267 "bdev_nvme_get_controllers", 00:05:54.267 "bdev_nvme_attach_controller", 00:05:54.267 "bdev_nvme_set_hotplug", 00:05:54.267 "bdev_nvme_set_options", 00:05:54.267 "bdev_passthru_delete", 00:05:54.267 "bdev_passthru_create", 00:05:54.267 "bdev_lvol_set_parent_bdev", 00:05:54.267 "bdev_lvol_set_parent", 00:05:54.267 "bdev_lvol_check_shallow_copy", 00:05:54.267 "bdev_lvol_start_shallow_copy", 00:05:54.267 "bdev_lvol_grow_lvstore", 00:05:54.267 "bdev_lvol_get_lvols", 00:05:54.267 "bdev_lvol_get_lvstores", 00:05:54.267 "bdev_lvol_delete", 00:05:54.267 "bdev_lvol_set_read_only", 00:05:54.267 "bdev_lvol_resize", 00:05:54.267 "bdev_lvol_decouple_parent", 00:05:54.267 "bdev_lvol_inflate", 00:05:54.267 "bdev_lvol_rename", 00:05:54.267 "bdev_lvol_clone_bdev", 00:05:54.267 "bdev_lvol_clone", 00:05:54.267 "bdev_lvol_snapshot", 00:05:54.267 "bdev_lvol_create", 00:05:54.267 "bdev_lvol_delete_lvstore", 00:05:54.267 "bdev_lvol_rename_lvstore", 00:05:54.267 "bdev_lvol_create_lvstore", 00:05:54.267 "bdev_raid_set_options", 00:05:54.267 "bdev_raid_remove_base_bdev", 
00:05:54.267 "bdev_raid_add_base_bdev", 00:05:54.267 "bdev_raid_delete", 00:05:54.267 "bdev_raid_create", 00:05:54.267 "bdev_raid_get_bdevs", 00:05:54.267 "bdev_error_inject_error", 00:05:54.267 "bdev_error_delete", 00:05:54.267 "bdev_error_create", 00:05:54.267 "bdev_split_delete", 00:05:54.267 "bdev_split_create", 00:05:54.267 "bdev_delay_delete", 00:05:54.267 "bdev_delay_create", 00:05:54.267 "bdev_delay_update_latency", 00:05:54.267 "bdev_zone_block_delete", 00:05:54.267 "bdev_zone_block_create", 00:05:54.267 "blobfs_create", 00:05:54.267 "blobfs_detect", 00:05:54.267 "blobfs_set_cache_size", 00:05:54.267 "bdev_aio_delete", 00:05:54.267 "bdev_aio_rescan", 00:05:54.267 "bdev_aio_create", 00:05:54.267 "bdev_ftl_set_property", 00:05:54.267 "bdev_ftl_get_properties", 00:05:54.267 "bdev_ftl_get_stats", 00:05:54.267 "bdev_ftl_unmap", 00:05:54.267 "bdev_ftl_unload", 00:05:54.267 "bdev_ftl_delete", 00:05:54.267 "bdev_ftl_load", 00:05:54.267 "bdev_ftl_create", 00:05:54.267 "bdev_virtio_attach_controller", 00:05:54.267 "bdev_virtio_scsi_get_devices", 00:05:54.267 "bdev_virtio_detach_controller", 00:05:54.267 "bdev_virtio_blk_set_hotplug", 00:05:54.267 "bdev_iscsi_delete", 00:05:54.267 "bdev_iscsi_create", 00:05:54.267 "bdev_iscsi_set_options", 00:05:54.267 "accel_error_inject_error", 00:05:54.267 "ioat_scan_accel_module", 00:05:54.267 "dsa_scan_accel_module", 00:05:54.267 "iaa_scan_accel_module", 00:05:54.267 "vfu_virtio_create_scsi_endpoint", 00:05:54.267 "vfu_virtio_scsi_remove_target", 00:05:54.267 "vfu_virtio_scsi_add_target", 00:05:54.267 "vfu_virtio_create_blk_endpoint", 00:05:54.267 "vfu_virtio_delete_endpoint", 00:05:54.267 "keyring_file_remove_key", 00:05:54.267 "keyring_file_add_key", 00:05:54.267 "keyring_linux_set_options", 00:05:54.267 "iscsi_get_histogram", 00:05:54.267 "iscsi_enable_histogram", 00:05:54.267 "iscsi_set_options", 00:05:54.267 "iscsi_get_auth_groups", 00:05:54.267 "iscsi_auth_group_remove_secret", 00:05:54.267 "iscsi_auth_group_add_secret", 
00:05:54.267 "iscsi_delete_auth_group", 00:05:54.267 "iscsi_create_auth_group", 00:05:54.267 "iscsi_set_discovery_auth", 00:05:54.267 "iscsi_get_options", 00:05:54.267 "iscsi_target_node_request_logout", 00:05:54.267 "iscsi_target_node_set_redirect", 00:05:54.267 "iscsi_target_node_set_auth", 00:05:54.267 "iscsi_target_node_add_lun", 00:05:54.267 "iscsi_get_stats", 00:05:54.267 "iscsi_get_connections", 00:05:54.267 "iscsi_portal_group_set_auth", 00:05:54.267 "iscsi_start_portal_group", 00:05:54.267 "iscsi_delete_portal_group", 00:05:54.267 "iscsi_create_portal_group", 00:05:54.267 "iscsi_get_portal_groups", 00:05:54.267 "iscsi_delete_target_node", 00:05:54.267 "iscsi_target_node_remove_pg_ig_maps", 00:05:54.267 "iscsi_target_node_add_pg_ig_maps", 00:05:54.267 "iscsi_create_target_node", 00:05:54.267 "iscsi_get_target_nodes", 00:05:54.267 "iscsi_delete_initiator_group", 00:05:54.267 "iscsi_initiator_group_remove_initiators", 00:05:54.267 "iscsi_initiator_group_add_initiators", 00:05:54.267 "iscsi_create_initiator_group", 00:05:54.267 "iscsi_get_initiator_groups", 00:05:54.267 "nvmf_set_crdt", 00:05:54.267 "nvmf_set_config", 00:05:54.267 "nvmf_set_max_subsystems", 00:05:54.267 "nvmf_stop_mdns_prr", 00:05:54.267 "nvmf_publish_mdns_prr", 00:05:54.267 "nvmf_subsystem_get_listeners", 00:05:54.267 "nvmf_subsystem_get_qpairs", 00:05:54.267 "nvmf_subsystem_get_controllers", 00:05:54.267 "nvmf_get_stats", 00:05:54.267 "nvmf_get_transports", 00:05:54.267 "nvmf_create_transport", 00:05:54.267 "nvmf_get_targets", 00:05:54.267 "nvmf_delete_target", 00:05:54.267 "nvmf_create_target", 00:05:54.267 "nvmf_subsystem_allow_any_host", 00:05:54.267 "nvmf_subsystem_remove_host", 00:05:54.267 "nvmf_subsystem_add_host", 00:05:54.267 "nvmf_ns_remove_host", 00:05:54.267 "nvmf_ns_add_host", 00:05:54.267 "nvmf_subsystem_remove_ns", 00:05:54.267 "nvmf_subsystem_add_ns", 00:05:54.267 "nvmf_subsystem_listener_set_ana_state", 00:05:54.267 "nvmf_discovery_get_referrals", 00:05:54.267 
"nvmf_discovery_remove_referral", 00:05:54.267 "nvmf_discovery_add_referral", 00:05:54.267 "nvmf_subsystem_remove_listener", 00:05:54.267 "nvmf_subsystem_add_listener", 00:05:54.267 "nvmf_delete_subsystem", 00:05:54.267 "nvmf_create_subsystem", 00:05:54.267 "nvmf_get_subsystems", 00:05:54.267 "env_dpdk_get_mem_stats", 00:05:54.267 "nbd_get_disks", 00:05:54.267 "nbd_stop_disk", 00:05:54.267 "nbd_start_disk", 00:05:54.267 "ublk_recover_disk", 00:05:54.267 "ublk_get_disks", 00:05:54.267 "ublk_stop_disk", 00:05:54.267 "ublk_start_disk", 00:05:54.267 "ublk_destroy_target", 00:05:54.267 "ublk_create_target", 00:05:54.267 "virtio_blk_create_transport", 00:05:54.267 "virtio_blk_get_transports", 00:05:54.267 "vhost_controller_set_coalescing", 00:05:54.267 "vhost_get_controllers", 00:05:54.267 "vhost_delete_controller", 00:05:54.267 "vhost_create_blk_controller", 00:05:54.267 "vhost_scsi_controller_remove_target", 00:05:54.267 "vhost_scsi_controller_add_target", 00:05:54.267 "vhost_start_scsi_controller", 00:05:54.267 "vhost_create_scsi_controller", 00:05:54.267 "thread_set_cpumask", 00:05:54.267 "framework_get_governor", 00:05:54.267 "framework_get_scheduler", 00:05:54.267 "framework_set_scheduler", 00:05:54.267 "framework_get_reactors", 00:05:54.267 "thread_get_io_channels", 00:05:54.267 "thread_get_pollers", 00:05:54.267 "thread_get_stats", 00:05:54.267 "framework_monitor_context_switch", 00:05:54.267 "spdk_kill_instance", 00:05:54.267 "log_enable_timestamps", 00:05:54.267 "log_get_flags", 00:05:54.267 "log_clear_flag", 00:05:54.267 "log_set_flag", 00:05:54.267 "log_get_level", 00:05:54.267 "log_set_level", 00:05:54.267 "log_get_print_level", 00:05:54.267 "log_set_print_level", 00:05:54.267 "framework_enable_cpumask_locks", 00:05:54.267 "framework_disable_cpumask_locks", 00:05:54.267 "framework_wait_init", 00:05:54.267 "framework_start_init", 00:05:54.267 "scsi_get_devices", 00:05:54.267 "bdev_get_histogram", 00:05:54.267 "bdev_enable_histogram", 00:05:54.267 
"bdev_set_qos_limit", 00:05:54.267 "bdev_set_qd_sampling_period", 00:05:54.267 "bdev_get_bdevs", 00:05:54.267 "bdev_reset_iostat", 00:05:54.267 "bdev_get_iostat", 00:05:54.267 "bdev_examine", 00:05:54.267 "bdev_wait_for_examine", 00:05:54.267 "bdev_set_options", 00:05:54.267 "notify_get_notifications", 00:05:54.267 "notify_get_types", 00:05:54.267 "accel_get_stats", 00:05:54.267 "accel_set_options", 00:05:54.267 "accel_set_driver", 00:05:54.267 "accel_crypto_key_destroy", 00:05:54.267 "accel_crypto_keys_get", 00:05:54.267 "accel_crypto_key_create", 00:05:54.267 "accel_assign_opc", 00:05:54.267 "accel_get_module_info", 00:05:54.267 "accel_get_opc_assignments", 00:05:54.267 "vmd_rescan", 00:05:54.267 "vmd_remove_device", 00:05:54.267 "vmd_enable", 00:05:54.267 "sock_get_default_impl", 00:05:54.267 "sock_set_default_impl", 00:05:54.267 "sock_impl_set_options", 00:05:54.267 "sock_impl_get_options", 00:05:54.267 "iobuf_get_stats", 00:05:54.267 "iobuf_set_options", 00:05:54.267 "keyring_get_keys", 00:05:54.267 "framework_get_pci_devices", 00:05:54.267 "framework_get_config", 00:05:54.267 "framework_get_subsystems", 00:05:54.267 "vfu_tgt_set_base_path", 00:05:54.267 "trace_get_info", 00:05:54.267 "trace_get_tpoint_group_mask", 00:05:54.267 "trace_disable_tpoint_group", 00:05:54.267 "trace_enable_tpoint_group", 00:05:54.267 "trace_clear_tpoint_mask", 00:05:54.267 "trace_set_tpoint_mask", 00:05:54.267 "spdk_get_version", 00:05:54.267 "rpc_get_methods" 00:05:54.267 ] 00:05:54.267 11:13:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:54.267 11:13:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.267 11:13:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.268 11:13:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:54.268 11:13:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1339521 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1339521 ']' 
00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1339521 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1339521 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1339521' 00:05:54.268 killing process with pid 1339521 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1339521 00:05:54.268 11:13:49 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1339521 00:05:54.527 00:05:54.527 real 0m1.558s 00:05:54.527 user 0m2.952s 00:05:54.527 sys 0m0.424s 00:05:54.527 11:13:50 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.527 11:13:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.527 ************************************ 00:05:54.527 END TEST spdkcli_tcp 00:05:54.527 ************************************ 00:05:54.527 11:13:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.527 11:13:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.527 11:13:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.527 11:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:54.786 ************************************ 00:05:54.786 START TEST dpdk_mem_utility 00:05:54.786 ************************************ 00:05:54.786 11:13:50 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.786 
* Looking for test storage... 00:05:54.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:54.786 11:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.786 11:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1339968 00:05:54.786 11:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1339968 00:05:54.786 11:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.786 11:13:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1339968 ']' 00:05:54.786 11:13:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.786 11:13:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.786 11:13:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.786 11:13:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.786 11:13:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.786 [2024-07-26 11:13:50.354894] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:05:54.786 [2024-07-26 11:13:50.354947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339968 ] 00:05:54.786 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.786 [2024-07-26 11:13:50.419906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.045 [2024-07-26 11:13:50.492231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.611 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.611 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:55.611 11:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:55.611 11:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:55.612 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.612 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.612 { 00:05:55.612 "filename": "/tmp/spdk_mem_dump.txt" 00:05:55.612 } 00:05:55.612 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.612 11:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:55.612 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:55.612 1 heaps totaling size 814.000000 MiB 00:05:55.612 size: 814.000000 MiB heap id: 0 00:05:55.612 end heaps---------- 00:05:55.612 8 mempools totaling size 598.116089 MiB 00:05:55.612 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:55.612 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:55.612 size: 84.521057 MiB name: bdev_io_1339968 00:05:55.612 size: 51.011292 MiB name: evtpool_1339968 
00:05:55.612 size: 50.003479 MiB name: msgpool_1339968 00:05:55.612 size: 21.763794 MiB name: PDU_Pool 00:05:55.612 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:55.612 size: 0.026123 MiB name: Session_Pool 00:05:55.612 end mempools------- 00:05:55.612 6 memzones totaling size 4.142822 MiB 00:05:55.612 size: 1.000366 MiB name: RG_ring_0_1339968 00:05:55.612 size: 1.000366 MiB name: RG_ring_1_1339968 00:05:55.612 size: 1.000366 MiB name: RG_ring_4_1339968 00:05:55.612 size: 1.000366 MiB name: RG_ring_5_1339968 00:05:55.612 size: 0.125366 MiB name: RG_ring_2_1339968 00:05:55.612 size: 0.015991 MiB name: RG_ring_3_1339968 00:05:55.612 end memzones------- 00:05:55.612 11:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:55.612 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:55.612 list of free elements. size: 12.519348 MiB 00:05:55.612 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:55.612 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:55.612 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:55.612 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:55.612 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:55.612 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:55.612 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:55.612 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:55.612 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:55.612 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:55.612 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:55.612 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:55.612 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:55.612 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:05:55.612 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:55.612 list of standard malloc elements. size: 199.218079 MiB 00:05:55.612 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:55.612 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:55.612 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:55.612 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:55.612 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:55.612 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:55.612 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:55.612 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:55.612 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:55.612 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:55.612 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:55.612 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:05:55.612 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:55.612 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:55.612 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:55.612 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:55.612 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:55.612 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:55.612 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:55.612 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:55.612 list of memzone associated elements. 
size: 602.262573 MiB 00:05:55.612 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:55.612 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:55.612 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:55.612 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:55.612 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:55.612 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1339968_0 00:05:55.612 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:55.612 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1339968_0 00:05:55.612 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:55.612 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1339968_0 00:05:55.612 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:55.612 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:55.612 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:55.612 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:55.612 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:55.612 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1339968 00:05:55.612 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:55.612 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1339968 00:05:55.612 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:55.612 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1339968 00:05:55.612 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:55.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:55.612 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:55.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:55.612 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:55.612 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:55.612 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:55.612 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:55.612 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:55.612 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1339968 00:05:55.612 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:55.612 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1339968 00:05:55.612 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:55.612 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1339968 00:05:55.612 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:55.612 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1339968 00:05:55.612 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:55.612 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1339968 00:05:55.612 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:55.612 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:55.612 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:55.612 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:55.612 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:55.612 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:55.612 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:55.612 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1339968 00:05:55.612 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:55.612 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:55.612 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:55.612 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:55.612 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:55.612 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1339968 00:05:55.612 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:55.612 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:55.612 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:55.612 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1339968 00:05:55.612 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:55.612 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1339968 00:05:55.612 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:55.612 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:55.612 11:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:55.613 11:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1339968 00:05:55.613 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1339968 ']' 00:05:55.613 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1339968 00:05:55.613 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:55.613 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.613 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1339968 00:05:55.871 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.871 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.871 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1339968' 00:05:55.871 killing process with pid 1339968 00:05:55.871 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1339968 00:05:55.871 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1339968 00:05:56.130 00:05:56.130 real 0m1.397s 
00:05:56.130 user 0m1.463s 00:05:56.130 sys 0m0.404s 00:05:56.130 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.130 11:13:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.130 ************************************ 00:05:56.130 END TEST dpdk_mem_utility 00:05:56.130 ************************************ 00:05:56.130 11:13:51 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:56.130 11:13:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.130 11:13:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.130 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:05:56.130 ************************************ 00:05:56.130 START TEST event 00:05:56.130 ************************************ 00:05:56.130 11:13:51 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:56.130 * Looking for test storage... 
00:05:56.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:56.130 11:13:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:56.130 11:13:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:56.130 11:13:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.130 11:13:51 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:56.130 11:13:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.130 11:13:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.388 ************************************ 00:05:56.388 START TEST event_perf 00:05:56.388 ************************************ 00:05:56.388 11:13:51 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.388 Running I/O for 1 seconds...[2024-07-26 11:13:51.825227] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:05:56.388 [2024-07-26 11:13:51.825300] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340311 ] 00:05:56.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.388 [2024-07-26 11:13:51.896358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.388 [2024-07-26 11:13:51.970099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.388 [2024-07-26 11:13:51.970209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.388 [2024-07-26 11:13:51.970289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.388 [2024-07-26 11:13:51.970290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.762 Running I/O for 1 seconds... 00:05:57.762 lcore 0: 210503 00:05:57.762 lcore 1: 210502 00:05:57.762 lcore 2: 210502 00:05:57.762 lcore 3: 210502 00:05:57.762 done. 
00:05:57.762 00:05:57.763 real 0m1.236s 00:05:57.763 user 0m4.149s 00:05:57.763 sys 0m0.084s 00:05:57.763 11:13:53 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.763 11:13:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.763 ************************************ 00:05:57.763 END TEST event_perf 00:05:57.763 ************************************ 00:05:57.763 11:13:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:57.763 11:13:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:57.763 11:13:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.763 11:13:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.763 ************************************ 00:05:57.763 START TEST event_reactor 00:05:57.763 ************************************ 00:05:57.763 11:13:53 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:57.763 [2024-07-26 11:13:53.132944] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:05:57.763 [2024-07-26 11:13:53.133005] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340550 ] 00:05:57.763 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.763 [2024-07-26 11:13:53.202213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.763 [2024-07-26 11:13:53.272011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.698 test_start 00:05:58.698 oneshot 00:05:58.698 tick 100 00:05:58.698 tick 100 00:05:58.698 tick 250 00:05:58.698 tick 100 00:05:58.698 tick 100 00:05:58.698 tick 100 00:05:58.698 tick 250 00:05:58.698 tick 500 00:05:58.698 tick 100 00:05:58.698 tick 100 00:05:58.698 tick 250 00:05:58.698 tick 100 00:05:58.698 tick 100 00:05:58.698 test_end 00:05:58.698 00:05:58.698 real 0m1.229s 00:05:58.698 user 0m1.144s 00:05:58.698 sys 0m0.081s 00:05:58.698 11:13:54 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.698 11:13:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:58.698 ************************************ 00:05:58.698 END TEST event_reactor 00:05:58.698 ************************************ 00:05:58.957 11:13:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.957 11:13:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:58.957 11:13:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.957 11:13:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.957 ************************************ 00:05:58.957 START TEST event_reactor_perf 00:05:58.957 ************************************ 00:05:58.957 11:13:54 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.957 [2024-07-26 11:13:54.431615] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:05:58.957 [2024-07-26 11:13:54.431818] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340743 ] 00:05:58.957 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.957 [2024-07-26 11:13:54.503297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.957 [2024-07-26 11:13:54.574765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.335 test_start 00:06:00.335 test_end 00:06:00.335 Performance: 521802 events per second 00:06:00.335 00:06:00.335 real 0m1.232s 00:06:00.335 user 0m1.149s 00:06:00.335 sys 0m0.079s 00:06:00.335 11:13:55 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.335 11:13:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.335 ************************************ 00:06:00.335 END TEST event_reactor_perf 00:06:00.335 ************************************ 00:06:00.335 11:13:55 event -- event/event.sh@49 -- # uname -s 00:06:00.335 11:13:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:00.335 11:13:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:00.335 11:13:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.335 11:13:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.335 11:13:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.335 ************************************ 00:06:00.335 START TEST event_scheduler 00:06:00.335 ************************************ 
00:06:00.335 11:13:55 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:00.335 * Looking for test storage... 00:06:00.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:00.335 11:13:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:00.335 11:13:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1341028 00:06:00.335 11:13:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.335 11:13:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:00.335 11:13:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1341028 00:06:00.335 11:13:55 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1341028 ']' 00:06:00.335 11:13:55 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.335 11:13:55 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.335 11:13:55 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.335 11:13:55 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.335 11:13:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.335 [2024-07-26 11:13:55.851266] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:06:00.335 [2024-07-26 11:13:55.851316] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341028 ] 00:06:00.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.335 [2024-07-26 11:13:55.918216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.593 [2024-07-26 11:13:55.999539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.593 [2024-07-26 11:13:55.999573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.593 [2024-07-26 11:13:55.999676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.593 [2024-07-26 11:13:55.999677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:01.159 11:13:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 [2024-07-26 11:13:56.666097] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:01.159 [2024-07-26 11:13:56.666113] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:01.159 [2024-07-26 11:13:56.666121] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:01.159 [2024-07-26 11:13:56.666126] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:01.159 [2024-07-26 11:13:56.666131] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.159 11:13:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 [2024-07-26 11:13:56.737878] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.159 11:13:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 ************************************ 00:06:01.159 START TEST scheduler_create_thread 00:06:01.159 ************************************ 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 2 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 3 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 4 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 5 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.159 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.418 6 
00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.418 7 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.418 8 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.418 9 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:01.418 11:13:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.418 10 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.418 11:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.985 11:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.985 11:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.985 11:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.985 11:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.363 11:13:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.363 11:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.363 11:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.363 11:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.363 11:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.298 11:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.298 00:06:04.298 real 0m3.099s 00:06:04.298 user 0m0.023s 00:06:04.298 sys 0m0.005s 00:06:04.298 11:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.298 11:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.298 ************************************ 00:06:04.298 END TEST scheduler_create_thread 00:06:04.298 ************************************ 00:06:04.298 11:13:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:04.298 11:13:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1341028 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1341028 ']' 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1341028 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341028 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:04.298 11:13:59 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341028' 00:06:04.298 killing process with pid 1341028 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1341028 00:06:04.298 11:13:59 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1341028 00:06:04.866 [2024-07-26 11:14:00.253016] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:04.866 00:06:04.866 real 0m4.745s 00:06:04.866 user 0m9.209s 00:06:04.866 sys 0m0.380s 00:06:04.866 11:14:00 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.866 11:14:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.866 ************************************ 00:06:04.866 END TEST event_scheduler 00:06:04.866 ************************************ 00:06:04.866 11:14:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.866 11:14:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.866 11:14:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.866 11:14:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.866 11:14:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.866 ************************************ 00:06:04.866 START TEST app_repeat 00:06:04.866 ************************************ 00:06:04.866 11:14:00 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:04.866 11:14:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.866 11:14:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.866 11:14:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:04.866 11:14:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:04.866 11:14:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:04.866 11:14:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:04.866 11:14:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:05.125 11:14:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1341848
00:06:05.125 11:14:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:05.125 11:14:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:05.125 11:14:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1341848'
Process app_repeat pid: 1341848
11:14:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:05.125 11:14:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
11:14:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1341848 /var/tmp/spdk-nbd.sock
00:06:05.125 11:14:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1341848 ']'
00:06:05.125 11:14:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:05.125 11:14:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:05.125 11:14:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
11:14:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:05.125 11:14:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:05.125 [2024-07-26 11:14:00.555198] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:06:05.125 [2024-07-26 11:14:00.555246] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341848 ]
00:06:05.125 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.125 [2024-07-26 11:14:00.619979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:05.125 [2024-07-26 11:14:00.699812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:05.125 [2024-07-26 11:14:00.699814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.061 11:14:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:06.061 11:14:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:06.061 11:14:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:06.061 Malloc0
00:06:06.061 11:14:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:06.061 Malloc1
00:06:06.320 11:14:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:06.320 /dev/nbd0
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:06.320 1+0 records in
00:06:06.320 1+0 records out
00:06:06.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230993 s, 17.7 MB/s
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:06.320 11:14:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:06.320 11:14:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:06.579 /dev/nbd1
00:06:06.579 11:14:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:06.579 11:14:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:06.579 1+0 records in
00:06:06.579 1+0 records out
00:06:06.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191397 s, 21.4 MB/s
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:06.579 11:14:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:06.579 11:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:06.579 11:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:06.579 11:14:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:06.579 11:14:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.579 11:14:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:06.838 11:14:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:06.838 {
00:06:06.838 "nbd_device": "/dev/nbd0",
00:06:06.838 "bdev_name": "Malloc0"
00:06:06.838 },
00:06:06.838 {
00:06:06.838 "nbd_device": "/dev/nbd1",
00:06:06.838 "bdev_name": "Malloc1"
00:06:06.838 }
00:06:06.838 ]'
00:06:06.838 11:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:06.838 {
00:06:06.838 "nbd_device": "/dev/nbd0",
00:06:06.838 "bdev_name": "Malloc0"
00:06:06.838 },
00:06:06.838 {
00:06:06.838 "nbd_device": "/dev/nbd1",
00:06:06.838 "bdev_name": "Malloc1"
00:06:06.838 }
00:06:06.838 ]'
00:06:06.838 11:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:06.839 /dev/nbd1'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:06.839 /dev/nbd1'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:06.839 256+0 records in
00:06:06.839 256+0 records out
00:06:06.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010369 s, 101 MB/s
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:06.839 256+0 records in
00:06:06.839 256+0 records out
00:06:06.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014067 s, 74.5 MB/s
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:06.839 256+0 records in
00:06:06.839 256+0 records out
00:06:06.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153105 s, 68.5 MB/s
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:06.839 11:14:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:07.098 11:14:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:07.357 11:14:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:07.616 11:14:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:07.616 11:14:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:07.616 11:14:03 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:07.873 [2024-07-26 11:14:03.435503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:07.873 [2024-07-26 11:14:03.501498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.873 [2024-07-26 11:14:03.501501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.131 [2024-07-26 11:14:03.541955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:08.131 [2024-07-26 11:14:03.541989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:10.664 11:14:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:10.664 11:14:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
11:14:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1341848 /var/tmp/spdk-nbd.sock
00:06:10.664 11:14:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1341848 ']'
00:06:10.664 11:14:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:10.664 11:14:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:10.664 11:14:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:10.664 11:14:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:10.664 11:14:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:10.923 11:14:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:10.923 11:14:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:10.923 11:14:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:11.182 Malloc0
00:06:11.182 11:14:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:11.182 Malloc1
00:06:11.441 11:14:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:11.441 11:14:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:11.441 /dev/nbd0
00:06:11.441 11:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:11.441 11:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:11.441 11:14:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:11.441 11:14:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:11.441 11:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:11.441 11:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:11.441 11:14:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:11.442 1+0 records in
00:06:11.442 1+0 records out
00:06:11.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189278 s, 21.6 MB/s
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:11.442 11:14:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:11.442 11:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:11.442 11:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:11.442 11:14:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:11.700 /dev/nbd1
00:06:11.700 11:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:11.700 11:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:11.700 11:14:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:11.700 11:14:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:11.701 1+0 records in
00:06:11.701 1+0 records out
00:06:11.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224046 s, 18.3 MB/s
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:11.701 11:14:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:11.701 11:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:11.701 11:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:11.701 11:14:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:11.701 11:14:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.701 11:14:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:11.959 11:14:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:11.960 {
00:06:11.960 "nbd_device": "/dev/nbd0",
00:06:11.960 "bdev_name": "Malloc0"
00:06:11.960 },
00:06:11.960 {
00:06:11.960 "nbd_device": "/dev/nbd1",
00:06:11.960 "bdev_name": "Malloc1"
00:06:11.960 }
00:06:11.960 ]'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:11.960 {
00:06:11.960 "nbd_device": "/dev/nbd0",
00:06:11.960 "bdev_name": "Malloc0"
00:06:11.960 },
00:06:11.960 {
00:06:11.960 "nbd_device": "/dev/nbd1",
00:06:11.960 "bdev_name": "Malloc1"
00:06:11.960 }
00:06:11.960 ]'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:11.960 /dev/nbd1'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:11.960 /dev/nbd1'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:11.960 256+0 records in
00:06:11.960 256+0 records out
00:06:11.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103133 s, 102 MB/s
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:11.960 256+0 records in
00:06:11.960 256+0 records out
00:06:11.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138505 s, 75.7 MB/s
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:11.960 256+0 records in
00:06:11.960 256+0 records out
00:06:11.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145591 s, 72.0 MB/s
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:11.960 11:14:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:12.218 11:14:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:12.219 11:14:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:12.477 11:14:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:12.736 11:14:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:12.995 11:14:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:12.996 11:14:08 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:12.996 [2024-07-26 11:14:08.581228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.996 [2024-07-26 11:14:08.647058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.996 [2024-07-26 11:14:08.647059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.255 [2024-07-26 11:14:08.688534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:13.255 [2024-07-26 11:14:08.688573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:15.784 11:14:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:15.784 11:14:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
spdk_app_start Round 2
11:14:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1341848 /var/tmp/spdk-nbd.sock
00:06:15.784 11:14:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1341848 ']'
00:06:15.784 11:14:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:15.784 11:14:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:15.784 11:14:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:15.784 11:14:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.784 11:14:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.041 11:14:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.041 11:14:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:16.041 11:14:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.300 Malloc0 00:06:16.300 11:14:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.300 Malloc1 00:06:16.300 11:14:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.300 11:14:11 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:16.559 11:14:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.559 11:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.559 11:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.559 11:14:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.559 /dev/nbd0 00:06:16.559 11:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.559 11:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.559 1+0 records in 00:06:16.559 1+0 records out 00:06:16.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178151 s, 23.0 MB/s 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:16.559 11:14:12 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.559 11:14:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:16.559 11:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.559 11:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.559 11:14:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.817 /dev/nbd1 00:06:16.817 11:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.817 11:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.817 1+0 records in 00:06:16.817 1+0 records out 00:06:16.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206842 s, 19.8 MB/s 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:16.817 11:14:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.818 11:14:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.818 11:14:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:16.818 11:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.818 11:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.818 11:14:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.818 11:14:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.818 11:14:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.076 { 00:06:17.076 "nbd_device": "/dev/nbd0", 00:06:17.076 "bdev_name": "Malloc0" 00:06:17.076 }, 00:06:17.076 { 00:06:17.076 "nbd_device": "/dev/nbd1", 00:06:17.076 "bdev_name": "Malloc1" 00:06:17.076 } 00:06:17.076 ]' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.076 { 00:06:17.076 "nbd_device": "/dev/nbd0", 00:06:17.076 "bdev_name": "Malloc0" 00:06:17.076 }, 00:06:17.076 { 00:06:17.076 "nbd_device": "/dev/nbd1", 00:06:17.076 "bdev_name": "Malloc1" 00:06:17.076 } 00:06:17.076 ]' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.076 /dev/nbd1' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.076 /dev/nbd1' 00:06:17.076 
11:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.076 256+0 records in 00:06:17.076 256+0 records out 00:06:17.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102917 s, 102 MB/s 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.076 256+0 records in 00:06:17.076 256+0 records out 00:06:17.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133245 s, 78.7 MB/s 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.076 256+0 records in 00:06:17.076 256+0 records out 00:06:17.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144038 s, 72.8 MB/s 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.076 11:14:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.077 11:14:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.335 11:14:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.592 11:14:13 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.592 11:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.851 11:14:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.851 11:14:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.851 11:14:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.138 [2024-07-26 11:14:13.673771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.138 [2024-07-26 11:14:13.741330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.138 [2024-07-26 11:14:13.741330] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.420 [2024-07-26 11:14:13.782338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.420 [2024-07-26 11:14:13.782374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.951 11:14:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1341848 /var/tmp/spdk-nbd.sock 00:06:20.951 11:14:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1341848 ']' 00:06:20.951 11:14:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.951 11:14:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.951 11:14:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:20.951 11:14:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.951 11:14:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:21.209 11:14:16 event.app_repeat -- event/event.sh@39 -- # killprocess 1341848 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1341848 ']' 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1341848 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341848 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341848' 00:06:21.209 killing process with pid 1341848 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1341848 00:06:21.209 11:14:16 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1341848 00:06:21.468 spdk_app_start is called in Round 0. 00:06:21.468 Shutdown signal received, stop current app iteration 00:06:21.468 Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 reinitialization... 00:06:21.468 spdk_app_start is called in Round 1. 00:06:21.468 Shutdown signal received, stop current app iteration 00:06:21.468 Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 reinitialization... 00:06:21.468 spdk_app_start is called in Round 2. 
00:06:21.468 Shutdown signal received, stop current app iteration 00:06:21.468 Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 reinitialization... 00:06:21.468 spdk_app_start is called in Round 3. 00:06:21.468 Shutdown signal received, stop current app iteration 00:06:21.468 11:14:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.468 11:14:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:21.468 00:06:21.468 real 0m16.373s 00:06:21.468 user 0m35.614s 00:06:21.468 sys 0m2.332s 00:06:21.468 11:14:16 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.468 11:14:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.468 ************************************ 00:06:21.468 END TEST app_repeat 00:06:21.468 ************************************ 00:06:21.468 11:14:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:21.468 11:14:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:21.468 11:14:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.468 11:14:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.468 11:14:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.468 ************************************ 00:06:21.469 START TEST cpu_locks 00:06:21.469 ************************************ 00:06:21.469 11:14:16 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:21.469 * Looking for test storage... 
00:06:21.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:21.469 11:14:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.469 11:14:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.469 11:14:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.469 11:14:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.469 11:14:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.469 11:14:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.469 11:14:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.469 ************************************ 00:06:21.469 START TEST default_locks 00:06:21.469 ************************************ 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1344834 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1344834 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1344834 ']' 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:21.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.469 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.727 [2024-07-26 11:14:17.140595] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:21.727 [2024-07-26 11:14:17.140643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344834 ] 00:06:21.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.727 [2024-07-26 11:14:17.205691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.727 [2024-07-26 11:14:17.284513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.294 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.294 11:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:22.295 11:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1344834 00:06:22.295 11:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1344834 00:06:22.295 11:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.861 lslocks: write error 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1344834 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1344834 ']' 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1344834 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.861 11:14:18 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1344834 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1344834' 00:06:22.861 killing process with pid 1344834 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1344834 00:06:22.861 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1344834 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1344834 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1344834 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1344834 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1344834 ']' 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1344834) - No such process 00:06:23.119 ERROR: process (pid: 1344834) is no longer running 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.119 11:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:23.120 11:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.120 11:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.120 11:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.120 00:06:23.120 real 0m1.588s 00:06:23.120 user 0m1.678s 00:06:23.120 sys 0m0.507s 00:06:23.120 11:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.120 11:14:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.120 ************************************ 00:06:23.120 END TEST default_locks 00:06:23.120 ************************************ 00:06:23.120 11:14:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:23.120 11:14:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.120 11:14:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.120 11:14:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.120 ************************************ 00:06:23.120 START TEST default_locks_via_rpc 00:06:23.120 ************************************ 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1345104 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1345104 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1345104 ']' 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.120 11:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.379 [2024-07-26 11:14:18.796710] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:23.379 [2024-07-26 11:14:18.796748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345104 ] 00:06:23.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.379 [2024-07-26 11:14:18.860759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.379 [2024-07-26 11:14:18.938984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1345104 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1345104 00:06:23.946 11:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1345104 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1345104 ']' 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1345104 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1345104 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1345104' 00:06:24.513 killing process with pid 1345104 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 1345104 00:06:24.513 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1345104 00:06:24.772 00:06:24.772 real 0m1.642s 00:06:24.772 user 0m1.719s 00:06:24.772 sys 0m0.549s 00:06:24.772 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.772 11:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.772 ************************************ 00:06:24.772 END TEST default_locks_via_rpc 00:06:24.772 ************************************ 00:06:24.772 11:14:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.772 11:14:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.772 11:14:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.772 11:14:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.030 ************************************ 00:06:25.030 START TEST non_locking_app_on_locked_coremask 00:06:25.030 ************************************ 00:06:25.030 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1345434 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1345434 /var/tmp/spdk.sock 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1345434 ']' 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.031 11:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.031 [2024-07-26 11:14:20.503778] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:25.031 [2024-07-26 11:14:20.503817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345434 ] 00:06:25.031 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.031 [2024-07-26 11:14:20.568662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.031 [2024-07-26 11:14:20.646560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1345592 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1345592 /var/tmp/spdk2.sock 00:06:25.967 11:14:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1345592 ']' 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.967 11:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.967 [2024-07-26 11:14:21.344016] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:25.967 [2024-07-26 11:14:21.344066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345592 ] 00:06:25.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.967 [2024-07-26 11:14:21.416990] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
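The `kill 1345104` followed by `wait 1345104` pairs in this trace follow the standard shell teardown pattern: signal the target, then reap it so the pid is gone before the next assertion. A sketch, assuming the target is a child of the current shell (the real `killprocess` in `autotest_common.sh` additionally checks the process name, as the `ps --no-headers -o comm=` calls above show, and special-cases sudo):

```shell
# Sketch of a kill-then-reap teardown as seen in the trace.
# Assumes $pid is a child of this shell so `wait` can reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # refuse an empty pid
    kill "$pid" 2>/dev/null || return 1  # SIGTERM; fail if not running
    wait "$pid" 2>/dev/null              # reap; status reflects the signal
    return 0
}
```

Reaping matters for the checks that follow in the log: a killed-but-unreaped child can still appear to exist, which would make a later "is it gone" probe flaky.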
00:06:25.967 [2024-07-26 11:14:21.417014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.967 [2024-07-26 11:14:21.572413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.534 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.534 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.534 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1345434 00:06:26.534 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1345434 00:06:26.534 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.102 lslocks: write error 00:06:27.102 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1345434 00:06:27.102 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1345434 ']' 00:06:27.102 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1345434 00:06:27.102 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.102 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.103 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1345434 00:06:27.103 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.103 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.103 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1345434' 00:06:27.103 killing process with pid 1345434 00:06:27.103 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1345434 00:06:27.103 11:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1345434 00:06:27.670 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1345592 00:06:27.670 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1345592 ']' 00:06:27.670 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1345592 00:06:27.670 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.670 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.671 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1345592 00:06:27.671 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.671 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.671 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1345592' 00:06:27.671 killing process with pid 1345592 00:06:27.671 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1345592 00:06:27.671 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1345592 00:06:28.237 00:06:28.237 real 0m3.182s 00:06:28.237 user 0m3.414s 00:06:28.237 sys 0m0.904s 00:06:28.237 11:14:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.237 11:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.237 ************************************ 00:06:28.237 END TEST non_locking_app_on_locked_coremask 00:06:28.237 ************************************ 00:06:28.237 11:14:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:28.237 11:14:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.237 11:14:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.237 11:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.237 ************************************ 00:06:28.237 START TEST locking_app_on_unlocked_coremask 00:06:28.237 ************************************ 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1346086 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1346086 /var/tmp/spdk.sock 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1346086 ']' 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.237 11:14:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.237 11:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.237 [2024-07-26 11:14:23.748187] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:28.237 [2024-07-26 11:14:23.748224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346086 ] 00:06:28.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.237 [2024-07-26 11:14:23.811922] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.237 [2024-07-26 11:14:23.811944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.237 [2024-07-26 11:14:23.889854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1346127 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1346127 /var/tmp/spdk2.sock 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1346127 ']' 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.181 11:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.181 [2024-07-26 11:14:24.565623] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:06:29.181 [2024-07-26 11:14:24.565675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346127 ] 00:06:29.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.181 [2024-07-26 11:14:24.635513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.181 [2024-07-26 11:14:24.793459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.751 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.751 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.751 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1346127 00:06:29.751 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1346127 00:06:29.751 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.318 lslocks: write error 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1346086 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1346086 ']' 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1346086 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1346086 00:06:30.318 11:14:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1346086' 00:06:30.318 killing process with pid 1346086 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1346086 00:06:30.318 11:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1346086 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1346127 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1346127 ']' 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1346127 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1346127 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1346127' 00:06:30.887 killing process with pid 1346127 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 1346127 00:06:30.887 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1346127 00:06:31.455 00:06:31.455 real 0m3.119s 00:06:31.455 user 0m3.345s 00:06:31.455 sys 0m0.848s 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.455 ************************************ 00:06:31.455 END TEST locking_app_on_unlocked_coremask 00:06:31.455 ************************************ 00:06:31.455 11:14:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:31.455 11:14:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.455 11:14:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.455 11:14:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.455 ************************************ 00:06:31.455 START TEST locking_app_on_locked_coremask 00:06:31.455 ************************************ 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1346589 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1346589 /var/tmp/spdk.sock 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1346589 ']' 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.455 11:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.455 [2024-07-26 11:14:26.937066] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:31.455 [2024-07-26 11:14:26.937107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346589 ] 00:06:31.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.455 [2024-07-26 11:14:27.002287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.455 [2024-07-26 11:14:27.070064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1346819 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1346819 /var/tmp/spdk2.sock 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask 
-- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1346819 /var/tmp/spdk2.sock 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1346819 /var/tmp/spdk2.sock 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1346819 ']' 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
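The `locks_exist` checks throughout this trace pipe `lslocks -p <pid>` into `grep -q spdk_cpu_lock`; the stray "lslocks: write error" lines appear because `grep -q` exits on the first match and closes the pipe while lslocks is still writing. A one-function sketch of that check:

```shell
# Sketch of the lock-existence check used in the trace: succeed iff
# the given pid holds a file lock whose path mentions spdk_cpu_lock.
# (grep -q exits on first match, which is why lslocks may log
# "lslocks: write error" when its output pipe closes early.)
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
```

The suite uses this both positively (a target claiming core 0 must hold the lock file) and negatively (after `--disable-cpumask-locks`, no `spdk_cpu_lock` entry may exist), which is why the same pipeline shows up in nearly every test above.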
00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.392 11:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 [2024-07-26 11:14:27.785088] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:32.392 [2024-07-26 11:14:27.785135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346819 ] 00:06:32.392 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.392 [2024-07-26 11:14:27.858940] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1346589 has claimed it. 00:06:32.392 [2024-07-26 11:14:27.858977] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1346819) - No such process 00:06:32.960 ERROR: process (pid: 1346819) is no longer running 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 1346589 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1346589 00:06:32.960 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.219 lslocks: write error 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1346589 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1346589 ']' 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1346589 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1346589 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1346589' 00:06:33.219 killing process with pid 1346589 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1346589 00:06:33.219 11:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1346589 00:06:33.478 00:06:33.478 real 0m2.240s 00:06:33.478 user 0m2.456s 00:06:33.478 sys 0m0.607s 00:06:33.478 11:14:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.478 
11:14:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.478 ************************************ 00:06:33.478 END TEST locking_app_on_locked_coremask 00:06:33.478 ************************************ 00:06:33.737 11:14:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:33.737 11:14:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.737 11:14:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.737 11:14:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.737 ************************************ 00:06:33.737 START TEST locking_overlapped_coremask 00:06:33.737 ************************************ 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1347077 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1347077 /var/tmp/spdk.sock 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1347077 ']' 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:33.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.737 11:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.737 [2024-07-26 11:14:29.244640] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:33.737 [2024-07-26 11:14:29.244683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347077 ] 00:06:33.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.737 [2024-07-26 11:14:29.309147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.737 [2024-07-26 11:14:29.378778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.737 [2024-07-26 11:14:29.378883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.737 [2024-07-26 11:14:29.378884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1347116 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1347116 /var/tmp/spdk2.sock 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1347116 /var/tmp/spdk2.sock 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1347116 /var/tmp/spdk2.sock 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1347116 ']' 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.674 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.674 [2024-07-26 11:14:30.097086] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:06:34.674 [2024-07-26 11:14:30.097133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347116 ] 00:06:34.674 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.674 [2024-07-26 11:14:30.171315] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1347077 has claimed it. 00:06:34.674 [2024-07-26 11:14:30.171354] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1347116) - No such process 00:06:35.242 ERROR: process (pid: 1347116) is no longer running 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.242 11:14:30 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1347077 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1347077 ']' 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1347077 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347077 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347077' 00:06:35.242 killing process with pid 1347077 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1347077 00:06:35.242 11:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1347077 00:06:35.500 00:06:35.500 real 0m1.901s 00:06:35.500 user 0m5.366s 00:06:35.500 sys 0m0.416s 00:06:35.500 11:14:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.500 11:14:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.500 ************************************ 00:06:35.500 END TEST locking_overlapped_coremask 00:06:35.500 ************************************ 00:06:35.500 11:14:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:35.500 11:14:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.500 11:14:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.500 11:14:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 ************************************ 00:06:35.758 START TEST locking_overlapped_coremask_via_rpc 00:06:35.758 ************************************ 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1347355 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1347355 /var/tmp/spdk.sock 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1347355 ']' 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.758 11:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 [2024-07-26 11:14:31.213383] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:35.758 [2024-07-26 11:14:31.213419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347355 ] 00:06:35.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.758 [2024-07-26 11:14:31.277048] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.758 [2024-07-26 11:14:31.277071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.758 [2024-07-26 11:14:31.357355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.758 [2024-07-26 11:14:31.357461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.758 [2024-07-26 11:14:31.357461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # 
spdk_tgt_pid2=1347583 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1347583 /var/tmp/spdk2.sock 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1347583 ']' 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.694 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.694 [2024-07-26 11:14:32.058545] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:36.694 [2024-07-26 11:14:32.058591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347583 ] 00:06:36.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.694 [2024-07-26 11:14:32.134300] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.694 [2024-07-26 11:14:32.134326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.694 [2024-07-26 11:14:32.280130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.694 [2024-07-26 11:14:32.283668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.694 [2024-07-26 11:14:32.283668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.261 11:14:32 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 [2024-07-26 11:14:32.871704] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1347355 has claimed it. 00:06:37.261 request: 00:06:37.261 { 00:06:37.261 "method": "framework_enable_cpumask_locks", 00:06:37.261 "req_id": 1 00:06:37.261 } 00:06:37.261 Got JSON-RPC error response 00:06:37.261 response: 00:06:37.261 { 00:06:37.261 "code": -32603, 00:06:37.261 "message": "Failed to claim CPU core: 2" 00:06:37.261 } 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1347355 /var/tmp/spdk.sock 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 1347355 ']' 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.261 11:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1347583 /var/tmp/spdk2.sock 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1347583 ']' 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.519 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.778 00:06:37.778 real 0m2.104s 00:06:37.778 user 0m0.864s 00:06:37.778 sys 0m0.174s 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.778 11:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.778 ************************************ 00:06:37.778 END TEST locking_overlapped_coremask_via_rpc 00:06:37.778 ************************************ 00:06:37.778 11:14:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:37.778 11:14:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1347355 ]] 00:06:37.778 11:14:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1347355 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1347355 ']' 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1347355 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347355 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347355' 00:06:37.778 killing process with pid 1347355 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1347355 00:06:37.778 11:14:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1347355 00:06:38.040 11:14:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1347583 ]] 00:06:38.040 11:14:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1347583 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1347583 ']' 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1347583 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347583 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1347583' 00:06:38.040 killing process with pid 1347583 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1347583 00:06:38.040 11:14:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1347583 00:06:38.611 11:14:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.611 11:14:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:38.611 11:14:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1347355 ]] 00:06:38.611 11:14:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1347355 00:06:38.611 11:14:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1347355 ']' 00:06:38.611 11:14:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1347355 00:06:38.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1347355) - No such process 00:06:38.611 11:14:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1347355 is not found' 00:06:38.611 Process with pid 1347355 is not found 00:06:38.611 11:14:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1347583 ]] 00:06:38.611 11:14:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1347583 00:06:38.611 11:14:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1347583 ']' 00:06:38.611 11:14:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1347583 00:06:38.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1347583) - No such process 00:06:38.611 11:14:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1347583 is not found' 00:06:38.611 Process with pid 1347583 is not found 00:06:38.611 11:14:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.611 00:06:38.611 real 0m17.055s 00:06:38.611 user 0m29.320s 00:06:38.611 sys 0m4.890s 00:06:38.611 11:14:34 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.611 
11:14:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.611 ************************************ 00:06:38.611 END TEST cpu_locks 00:06:38.611 ************************************ 00:06:38.611 00:06:38.611 real 0m42.377s 00:06:38.611 user 1m20.781s 00:06:38.611 sys 0m8.195s 00:06:38.611 11:14:34 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.611 11:14:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.611 ************************************ 00:06:38.611 END TEST event 00:06:38.611 ************************************ 00:06:38.611 11:14:34 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.611 11:14:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.611 11:14:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.612 11:14:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.612 ************************************ 00:06:38.612 START TEST thread 00:06:38.612 ************************************ 00:06:38.612 11:14:34 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.612 * Looking for test storage... 
00:06:38.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:38.612 11:14:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.612 11:14:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:38.612 11:14:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.612 11:14:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.612 ************************************ 00:06:38.612 START TEST thread_poller_perf 00:06:38.612 ************************************ 00:06:38.612 11:14:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.870 [2024-07-26 11:14:34.274462] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:06:38.870 [2024-07-26 11:14:34.274534] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347987 ] 00:06:38.870 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.870 [2024-07-26 11:14:34.345102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.870 [2024-07-26 11:14:34.418057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.870 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:40.246 ====================================== 00:06:40.246 busy:2106585738 (cyc) 00:06:40.246 total_run_count: 419000 00:06:40.246 tsc_hz: 2100000000 (cyc) 00:06:40.246 ====================================== 00:06:40.246 poller_cost: 5027 (cyc), 2393 (nsec) 00:06:40.246 00:06:40.246 real 0m1.240s 00:06:40.246 user 0m1.154s 00:06:40.246 sys 0m0.082s 00:06:40.246 11:14:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.246 11:14:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.246 ************************************ 00:06:40.246 END TEST thread_poller_perf 00:06:40.246 ************************************ 00:06:40.247 11:14:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.247 11:14:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:40.247 11:14:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.247 11:14:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.247 ************************************ 00:06:40.247 START TEST thread_poller_perf 00:06:40.247 ************************************ 00:06:40.247 11:14:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.247 [2024-07-26 11:14:35.585289] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:06:40.247 [2024-07-26 11:14:35.585358] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348188 ] 00:06:40.247 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.247 [2024-07-26 11:14:35.656588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.247 [2024-07-26 11:14:35.728854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.247 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.182 ====================================== 00:06:41.182 busy:2101329988 (cyc) 00:06:41.182 total_run_count: 5567000 00:06:41.182 tsc_hz: 2100000000 (cyc) 00:06:41.182 ====================================== 00:06:41.182 poller_cost: 377 (cyc), 179 (nsec) 00:06:41.182 00:06:41.182 real 0m1.235s 00:06:41.182 user 0m1.141s 00:06:41.182 sys 0m0.089s 00:06:41.182 11:14:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.183 11:14:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.183 ************************************ 00:06:41.183 END TEST thread_poller_perf 00:06:41.183 ************************************ 00:06:41.183 11:14:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:41.183 00:06:41.183 real 0m2.709s 00:06:41.183 user 0m2.379s 00:06:41.183 sys 0m0.339s 00:06:41.183 11:14:36 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.183 11:14:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.183 ************************************ 00:06:41.183 END TEST thread 00:06:41.183 ************************************ 00:06:41.441 11:14:36 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:41.441 11:14:36 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:06:41.441 11:14:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.441 11:14:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.441 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.441 ************************************ 00:06:41.441 START TEST app_cmdline 00:06:41.441 ************************************ 00:06:41.441 11:14:36 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.441 * Looking for test storage... 00:06:41.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:41.442 11:14:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:41.442 11:14:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1348502 00:06:41.442 11:14:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1348502 00:06:41.442 11:14:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:41.442 11:14:36 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1348502 ']' 00:06:41.442 11:14:36 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.442 11:14:36 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.442 11:14:36 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.442 11:14:36 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.442 11:14:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.442 [2024-07-26 11:14:37.046442] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:06:41.442 [2024-07-26 11:14:37.046499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348502 ] 00:06:41.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.700 [2024-07-26 11:14:37.116331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.700 [2024-07-26 11:14:37.192811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.267 11:14:37 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.267 11:14:37 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:42.267 11:14:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:42.526 { 00:06:42.526 "version": "SPDK v24.09-pre git sha1 487ff9e1a", 00:06:42.526 "fields": { 00:06:42.526 "major": 24, 00:06:42.526 "minor": 9, 00:06:42.526 "patch": 0, 00:06:42.526 "suffix": "-pre", 00:06:42.526 "commit": "487ff9e1a" 00:06:42.526 } 00:06:42.526 } 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.526 11:14:38 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:42.526 11:14:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:42.526 11:14:38 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.785 request: 00:06:42.785 { 00:06:42.785 "method": "env_dpdk_get_mem_stats", 00:06:42.785 "req_id": 1 
00:06:42.785 } 00:06:42.785 Got JSON-RPC error response 00:06:42.785 response: 00:06:42.785 { 00:06:42.785 "code": -32601, 00:06:42.785 "message": "Method not found" 00:06:42.785 } 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.785 11:14:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1348502 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1348502 ']' 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1348502 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348502 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348502' 00:06:42.785 killing process with pid 1348502 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@969 -- # kill 1348502 00:06:42.785 11:14:38 app_cmdline -- common/autotest_common.sh@974 -- # wait 1348502 00:06:43.044 00:06:43.044 real 0m1.697s 00:06:43.044 user 0m2.019s 00:06:43.044 sys 0m0.445s 00:06:43.044 11:14:38 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.044 11:14:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.044 ************************************ 00:06:43.044 END TEST app_cmdline 00:06:43.044 ************************************ 00:06:43.044 11:14:38 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.044 11:14:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.044 11:14:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.044 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:06:43.044 ************************************ 00:06:43.044 START TEST version 00:06:43.044 ************************************ 00:06:43.044 11:14:38 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.303 * Looking for test storage... 00:06:43.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:43.303 11:14:38 version -- app/version.sh@17 -- # get_header_version major 00:06:43.304 11:14:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.304 11:14:38 version -- app/version.sh@14 -- # cut -f2 00:06:43.304 11:14:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.304 11:14:38 version -- app/version.sh@17 -- # major=24 00:06:43.304 11:14:38 version -- app/version.sh@18 -- # get_header_version minor 00:06:43.304 11:14:38 version -- app/version.sh@14 -- # cut -f2 00:06:43.304 11:14:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.304 11:14:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.304 11:14:38 version -- app/version.sh@18 -- # minor=9 00:06:43.304 11:14:38 version -- app/version.sh@19 -- # get_header_version patch 00:06:43.304 11:14:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.304 11:14:38 version -- app/version.sh@14 -- # cut -f2 00:06:43.304 11:14:38 
version -- app/version.sh@14 -- # tr -d '"' 00:06:43.304 11:14:38 version -- app/version.sh@19 -- # patch=0 00:06:43.304 11:14:38 version -- app/version.sh@20 -- # get_header_version suffix 00:06:43.304 11:14:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.304 11:14:38 version -- app/version.sh@14 -- # cut -f2 00:06:43.304 11:14:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.304 11:14:38 version -- app/version.sh@20 -- # suffix=-pre 00:06:43.304 11:14:38 version -- app/version.sh@22 -- # version=24.9 00:06:43.304 11:14:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:43.304 11:14:38 version -- app/version.sh@28 -- # version=24.9rc0 00:06:43.304 11:14:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:43.304 11:14:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:43.304 11:14:38 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:43.304 11:14:38 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:43.304 00:06:43.304 real 0m0.158s 00:06:43.304 user 0m0.091s 00:06:43.304 sys 0m0.103s 00:06:43.304 11:14:38 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.304 11:14:38 version -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 ************************************ 00:06:43.304 END TEST version 00:06:43.304 ************************************ 00:06:43.304 11:14:38 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@202 -- # uname -s 00:06:43.304 11:14:38 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:06:43.304 11:14:38 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:43.304 11:14:38 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:43.304 11:14:38 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:43.304 11:14:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.304 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 11:14:38 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:43.304 11:14:38 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:43.304 11:14:38 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.304 11:14:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.304 11:14:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.304 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 ************************************ 00:06:43.304 START TEST nvmf_tcp 00:06:43.304 ************************************ 00:06:43.304 11:14:38 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.562 * Looking for test storage... 00:06:43.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:43.562 11:14:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:43.562 11:14:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:43.562 11:14:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:43.562 11:14:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.562 11:14:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.562 11:14:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.562 ************************************ 00:06:43.562 START TEST nvmf_target_core 00:06:43.562 ************************************ 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:43.562 * Looking for test storage... 00:06:43.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.562 11:14:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.563 11:14:39 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.563 ************************************ 00:06:43.563 START TEST nvmf_abort 00:06:43.563 ************************************ 00:06:43.563 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:43.822 * Looking for test storage... 
00:06:43.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.822 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:43.823 11:14:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:43.823 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.181 11:14:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:49.181 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:49.181 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:49.181 11:14:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:49.181 Found net devices under 0000:86:00.0: cvl_0_0 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.181 11:14:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:49.181 Found net devices under 0000:86:00.1: cvl_0_1 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.181 11:14:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.181 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.441 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.441 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.441 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.441 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:49.441 11:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.441 11:14:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:49.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:06:49.441 00:06:49.441 --- 10.0.0.2 ping statistics --- 00:06:49.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.441 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:49.441 00:06:49.441 --- 10.0.0.1 ping statistics --- 00:06:49.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.441 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.441 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:49.699 11:14:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1352110 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1352110 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1352110 ']' 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.699 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.699 [2024-07-26 11:14:45.169611] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:06:49.699 [2024-07-26 11:14:45.169673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.699 [2024-07-26 11:14:45.236863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.699 [2024-07-26 11:14:45.310569] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.699 [2024-07-26 11:14:45.310607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.699 [2024-07-26 11:14:45.310614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.699 [2024-07-26 11:14:45.310620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.699 [2024-07-26 11:14:45.310625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:49.699 [2024-07-26 11:14:45.310745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.699 [2024-07-26 11:14:45.310851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.699 [2024-07-26 11:14:45.310852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.633 11:14:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 [2024-07-26 11:14:46.006855] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 Malloc0 00:06:50.633 11:14:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 Delay0 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 [2024-07-26 11:14:46.087378] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.633 11:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:50.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.633 [2024-07-26 11:14:46.205327] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:53.161 Initializing NVMe Controllers 00:06:53.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:53.161 controller IO queue size 128 less than required 00:06:53.161 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:53.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:53.161 Initialization complete. Launching workers. 
00:06:53.161 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 44406 00:06:53.161 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 44467, failed to submit 62 00:06:53.161 success 44410, unsuccessful 57, failed 0 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:53.161 rmmod nvme_tcp 00:06:53.161 rmmod nvme_fabrics 00:06:53.161 rmmod nvme_keyring 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:53.161 11:14:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1352110 ']' 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1352110 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1352110 ']' 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1352110 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1352110 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1352110' 00:06:53.161 killing process with pid 1352110 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1352110 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1352110 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:53.161 11:14:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.161 11:14:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.064 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:55.064 00:06:55.064 real 0m11.418s 00:06:55.064 user 0m13.019s 00:06:55.064 sys 0m5.223s 00:06:55.064 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.064 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.064 ************************************ 00:06:55.065 END TEST nvmf_abort 00:06:55.065 ************************************ 00:06:55.065 11:14:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:55.065 11:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:55.065 11:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.065 11:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.065 ************************************ 00:06:55.065 START TEST nvmf_ns_hotplug_stress 00:06:55.065 ************************************ 00:06:55.065 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:55.324 * Looking for test storage... 
00:06:55.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:55.324 11:14:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:55.324 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:55.325 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.891 11:14:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.891 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:01.892 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:01.892 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.892 11:14:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:01.892 Found net devices under 0000:86:00.0: cvl_0_0 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:01.892 Found net devices 
under 0000:86:00.1: cvl_0_1 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.892 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:07:01.892 00:07:01.892 --- 10.0.0.2 ping statistics --- 00:07:01.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.893 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:01.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:07:01.893 00:07:01.893 --- 10.0.0.1 ping statistics --- 00:07:01.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.893 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1356199 00:07:01.893 11:14:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1356199 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1356199 ']' 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.893 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.893 [2024-07-26 11:14:56.684318] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:07:01.893 [2024-07-26 11:14:56.684361] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.893 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.893 [2024-07-26 11:14:56.755971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.893 [2024-07-26 11:14:56.833216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:01.893 [2024-07-26 11:14:56.833252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.893 [2024-07-26 11:14:56.833259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.893 [2024-07-26 11:14:56.833264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.893 [2024-07-26 11:14:56.833269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.893 [2024-07-26 11:14:56.833377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.893 [2024-07-26 11:14:56.833480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.893 [2024-07-26 11:14:56.833481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:01.893 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:07:02.151 [2024-07-26 11:14:57.678050] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.151 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:02.409 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.409 [2024-07-26 11:14:58.062710] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.667 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:02.667 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:02.924 Malloc0 00:07:02.924 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:03.182 Delay0 00:07:03.182 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.182 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:03.440 NULL1 00:07:03.440 11:14:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:03.697 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:03.697 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1356620 00:07:03.697 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:03.697 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.697 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.697 Read completed with error (sct=0, sc=11) 00:07:03.697 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.955 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:03.955 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:04.213 true 00:07:04.213 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:04.213 11:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.147 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.147 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:05.147 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:05.450 true 00:07:05.450 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:05.451 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.709 11:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.709 11:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:05.709 11:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:05.968 true 00:07:05.968 11:15:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:05.968 11:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.348 11:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.348 11:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:07.348 11:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:07.607 true 00:07:07.607 11:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:07.607 11:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.547 11:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:07:08.548 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:08.548 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:08.548 true 00:07:08.806 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:08.806 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.806 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.065 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:09.065 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:09.065 true 00:07:09.325 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:09.325 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.325 11:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.325 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:07:09.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.583 11:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:09.583 11:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:09.842 true 00:07:09.842 11:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:09.842 11:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.779 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.779 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:10.779 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:11.037 true 00:07:11.037 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:11.037 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:11.296 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.296 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:11.296 11:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:11.555 true 00:07:11.555 11:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:11.555 11:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.931 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.931 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:12.931 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:13.190 true 00:07:13.190 11:15:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:13.190 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.125 11:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.125 11:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:14.125 11:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:14.385 true 00:07:14.385 11:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:14.385 11:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.385 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.644 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:14.644 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:14.903 true 00:07:14.903 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:14.903 11:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.097 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.097 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:16.097 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:16.355 true 00:07:16.355 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:16.355 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.291 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.291 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 
00:07:17.291 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:17.549 true 00:07:17.549 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:17.549 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.818 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.818 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:17.818 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:18.133 true 00:07:18.133 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:18.133 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.107 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:07:19.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.365 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:19.365 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:19.624 true 00:07:19.624 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:19.624 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.559 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.559 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:20.559 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:20.818 true 00:07:20.818 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:20.818 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.076 11:15:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.076 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:21.076 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:21.334 true 00:07:21.334 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:21.334 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.592 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.592 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:21.592 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:21.851 true 00:07:21.851 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:21.851 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.110 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.110 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:22.110 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:22.368 true 00:07:22.368 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:22.369 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.745 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.745 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:23.745 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:24.004 true 00:07:24.004 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:24.004 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.940 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.940 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:24.940 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:25.198 true 00:07:25.198 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:25.198 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.198 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.457 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:25.457 11:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:25.716 true 00:07:25.716 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:25.716 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.911 11:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.912 11:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:26.912 11:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:27.170 true 00:07:27.170 11:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:27.170 11:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.103 11:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.103 11:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:28.103 11:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:28.361 true 00:07:28.361 11:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:28.361 11:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.619 11:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.877 11:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:28.877 11:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:28.877 true 00:07:28.877 11:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620 00:07:28.877 11:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.135 11:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.135 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:07:29.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.417 [2024-07-26 11:15:24.846127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.417 [2024-07-26 11:15:24.846482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.418 [2024-07-26 11:15:24.846517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.418 [2024-07-26 11:15:24.846553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.418 [2024-07-26 11:15:24.846592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.418 [2024-07-26 11:15:24.846634] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.418 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error records repeated for each timestamp from 11:15:24.846676 through 11:15:24.853467 ...] 00:07:29.419 [2024-07-26 11:15:24.853508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.853981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 
[2024-07-26 11:15:24.854098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854663] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.854884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.419 [2024-07-26 11:15:24.855351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.855700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856392] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.856962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 
11:15:24.857618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.857980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.858759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 
[2024-07-26 11:15:24.859068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859623] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.420 [2024-07-26 11:15:24.859980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860896] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.860979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.861660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 11:15:24.862726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421 [2024-07-26 
[2024-07-26 11:15:24.862765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.421
(identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeated several hundred times between 11:15:24.862765 and 11:15:24.877761; repeats elided) 00:07:29.424 11:15:24
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:29.424 [2024-07-26 11:15:24.877805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.877837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.877880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.877919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.877957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 11:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:29.424 [2024-07-26 11:15:24.878147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.878199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.878242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.878283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.878323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.424 [2024-07-26 11:15:24.878368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 
11:15:24.878445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.878987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 
[2024-07-26 11:15:24.879692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.879972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880346] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.880954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.881723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.425 [2024-07-26 11:15:24.881768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.881815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.881854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.881901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.881943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.881982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882338] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.882983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.425 [2024-07-26 11:15:24.883355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 
11:15:24.883649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.883974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.884959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 
[2024-07-26 11:15:24.885048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885611] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.885999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886931] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.426 [2024-07-26 11:15:24.886976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.427 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.902982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 
[2024-07-26 11:15:24.903021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903787] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.903968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.904981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905093] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.905980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.906016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.906056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.906099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.906138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.906178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.906222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.430 [2024-07-26 11:15:24.906262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 
11:15:24.906303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.906338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.906381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.907969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 
[2024-07-26 11:15:24.908359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908941] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.908980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.909930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910337] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.431 [2024-07-26 11:15:24.910713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.910759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.910803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.911991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 11:15:24.912023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432 [2024-07-26 
11:15:24.912062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.432
[... identical *ERROR* line from ctrlr_bdev.c:309 repeated for each test iteration, timestamps 11:15:24.912103 through 11:15:24.927247 ...] 00:07:29.435 [2024-07-26
11:15:24.927285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.927979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 
[2024-07-26 11:15:24.928536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.928964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929199] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.929934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.930108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.930149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.435 [2024-07-26 11:15:24.930188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930553] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.930762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.931991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 
11:15:24.932601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.932975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 
[2024-07-26 11:15:24.933837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.933999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.436 [2024-07-26 11:15:24.934566] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.934986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935936] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.935985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.936522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.437 [2024-07-26 11:15:24.937164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.438 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:29.440 [2024-07-26 11:15:24.951342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:29.440 [2024-07-26 11:15:24.951384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.951873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.951915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.951957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952399] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.952970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 
11:15:24.953727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.440 [2024-07-26 11:15:24.953912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.953957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.954987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 
[2024-07-26 11:15:24.955110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.955565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956171] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.956991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957478] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.957972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.441 [2024-07-26 11:15:24.958308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 
11:15:24.958683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.958827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.959965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 
[2024-07-26 11:15:24.960194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960761] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.960928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.961977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.962021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.442 [2024-07-26 11:15:24.962067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.442 [2024-07-26 11:15:24.962113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:29.442-00:07:29.446 (previous *ERROR* line repeated, timestamps [2024-07-26 11:15:24.962163] through [2024-07-26 11:15:24.977273]; duplicate messages omitted)
00:07:29.446 [2024-07-26 11:15:24.977310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977922] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.977965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.978996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 
11:15:24.979403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.446 [2024-07-26 11:15:24.979721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.979763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.979806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.979843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.979882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.979928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.979967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 
[2024-07-26 11:15:24.980646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.980962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981197] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.981985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.982030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.982831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.982881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.982929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.982968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983253] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.983972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 
11:15:24.984473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.447 [2024-07-26 11:15:24.984555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.984979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 
[2024-07-26 11:15:24.985950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.985996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986921] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.448 [2024-07-26 11:15:24.986963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated with successive timestamps from 11:15:24.987002 through 11:15:25.001931]
00:07:29.449 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-07-26 11:15:25.001977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.002531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003323] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.451 [2024-07-26 11:15:25.003858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.003903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.003949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.003994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 
11:15:25.004604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.004999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 
[2024-07-26 11:15:25.005942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.005990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006693] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.006974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.007976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008028] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.452 [2024-07-26 11:15:25.008457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.008984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.009025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.009065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.009115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.009938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.009982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 
11:15:25.010104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.010985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 
[2024-07-26 11:15:25.011449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.011971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.012011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.012057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.012093] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.453 [2024-07-26 11:15:25.012130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027650] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.456 [2024-07-26 11:15:25.027988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.028983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029029] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.029971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 
11:15:25.030377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.030667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.031974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 
[2024-07-26 11:15:25.032165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032737] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.457 [2024-07-26 11:15:25.032861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.032900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.032938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.032980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.033974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034220] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.034971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.035482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 
11:15:25.036179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.036976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 
[2024-07-26 11:15:25.037420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.037981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.458 [2024-07-26 11:15:25.038025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.459 [2024-07-26 11:15:25.038070] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.459 [2024-07-26 11:15:25.038116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated for timestamps 11:15:25.038157 through 11:15:25.052073; duplicates removed]
00:07:29.460 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:29.461 true
00:07:29.461 [2024-07-26 11:15:25.052112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:29.461 [2024-07-26 11:15:25.052149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.461 [2024-07-26 11:15:25.052190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052792] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.052976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.462 [2024-07-26 11:15:25.053984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 11:15:25.054026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 11:15:25.054058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 
11:15:25.054101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 11:15:25.054143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 11:15:25.054185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 11:15:25.054228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 11:15:25.054274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.742 [2024-07-26 11:15:25.054313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.054970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 
[2024-07-26 11:15:25.055572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.055993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056198] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.056971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.057386] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.058961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 11:15:25.059449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.743 [2024-07-26 
11:15:25.059488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.059990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 
[2024-07-26 11:15:25.060682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.060932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061451] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.061843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.062329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.062378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.062421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.062468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.062503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.744 [2024-07-26 11:15:25.062543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.744 [2024-07-26 11:15:25.062586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:29.744 [the identical nvmf_bdev_ctrlr_read_cmd error repeats continuously from 11:15:25.062586 through 11:15:25.077171; repeated occurrences omitted]
00:07:29.746 11:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620
00:07:29.746 11:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:29.747 [2024-07-26 11:15:25.077171] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.747 [2024-07-26 11:15:25.077605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.077653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.077698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.078973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079171] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.079990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 
11:15:25.080417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.080977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 
[2024-07-26 11:15:25.081886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.081975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082463] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.748 [2024-07-26 11:15:25.082679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.082999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083715] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.083992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.084041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.084087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.084870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.084913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.084954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.084992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 
11:15:25.085678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.085983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 
[2024-07-26 11:15:25.086929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.086973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.749 [2024-07-26 11:15:25.087472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.750 [2024-07-26 11:15:25.087517] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.750 [2024-07-26 11:15:25.087721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:29.751 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:29.753 [2024-07-26 11:15:25.102122] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.753 [2024-07-26 11:15:25.102962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103618] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.103990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.753 [2024-07-26 11:15:25.104595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 
11:15:25.104795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.104968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.105008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.105046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.105085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.105126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.105166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.105956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 
[2024-07-26 11:15:25.106842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.106987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107417] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.107990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108599] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.108987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.754 [2024-07-26 11:15:25.109406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.109972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 
11:15:25.110104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.110971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 
[2024-07-26 11:15:25.111296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.111546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112695] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.755 [2024-07-26 11:15:25.112744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* record repeats continuously, timestamps 2024-07-26 11:15:25.112794 through 11:15:25.126701 ...]
[2024-07-26 11:15:25.126737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.758 [2024-07-26 11:15:25.126775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.126816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.126864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.126913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.126958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127361] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.127967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.128423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129409] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.129984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 
11:15:25.130636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.130993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.759 [2024-07-26 11:15:25.131614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.131665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.131714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.131760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.131806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.131853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 
[2024-07-26 11:15:25.131896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.131944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132597] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.132766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.133978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134438] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.134989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 
11:15:25.135708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.135997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.760 [2024-07-26 11:15:25.136470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.763 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:29.764
[2024-07-26 11:15:25.152291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152865] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.152985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.153998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154320] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.764 [2024-07-26 11:15:25.154558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.154992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 
11:15:25.155573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.155987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.156363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 
[2024-07-26 11:15:25.157516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.157979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158091] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.158960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.159004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.159048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.159103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.765 [2024-07-26 11:15:25.159148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159428] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.159942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.160766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 
11:15:25.161196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.766 [2024-07-26 11:15:25.161887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error repeated for every subsequent entry, timestamps 2024-07-26 11:15:25.161947 through 11:15:25.177196 ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 
[2024-07-26 11:15:25.177902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.177947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178909] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.178995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.179974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180113] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.180996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 
11:15:25.181560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.770 [2024-07-26 11:15:25.181906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.181937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.181979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.182963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 
[2024-07-26 11:15:25.183482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.183994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184073] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.184980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185325] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.185968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.771 [2024-07-26 11:15:25.186018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 
11:15:25.186826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.772 [2024-07-26 11:15:25.186874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.774 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:29.775 [2024-07-26 11:15:25.202363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.202916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 
11:15:25.203393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.203998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 
[2024-07-26 11:15:25.204655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.204984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.775 [2024-07-26 11:15:25.205031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205314] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.205980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206755] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.206972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 
11:15:25.207957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.207997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.208037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.208078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.208115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.208156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.208197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.208927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.208975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.209028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.209071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.209117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.209163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.209213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.776 [2024-07-26 11:15:25.209269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 
[2024-07-26 11:15:25.209949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.209995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210502] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.210987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.211959] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.777 [2024-07-26 11:15:25.212004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(last message repeated with successive timestamps from [2024-07-26 11:15:25.212055] through [2024-07-26 11:15:25.226846])
> SGL length 1 00:07:29.781 [2024-07-26 11:15:25.226886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.226931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.226972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227477] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.227977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 
11:15:25.228925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.228974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.781 [2024-07-26 11:15:25.229820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.782 [2024-07-26 11:15:25.229860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.782 [2024-07-26 11:15:25.229900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.229939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.229984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 
[2024-07-26 11:15:25.230619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.230961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231207] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.231987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232746] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.232965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.783 [2024-07-26 11:15:25.233536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 
11:15:25.233951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.233991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.234951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.235002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.235049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.235094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.235139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.235928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 
[2024-07-26 11:15:25.235980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236562] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.236998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.237039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.237081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:29.784 [2024-07-26 11:15:25.237130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:29.784 [2024-07-26 11:15:25.237174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:309 errors elided: message repeated from 2024-07-26 11:15:25.237213 through 11:15:25.243678, log time 00:07:29.784-00:07:29.786]
00:07:29.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:29.786 11:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:29.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:29.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:29.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:30.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:30.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:30.068 [2024-07-26 11:15:25.452765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:309 errors elided: message repeated from 2024-07-26 11:15:25.452834 through 11:15:25.460418, log time 00:07:30.068-00:07:30.069]
00:07:30.069 [2024-07-26 11:15:25.460460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.460987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 
11:15:25.461069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.461975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 
[2024-07-26 11:15:25.462516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.069 [2024-07-26 11:15:25.462757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.462803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.462848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.462887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.462924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.462963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.463990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.464029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.464065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.464103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.464846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.464895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.464940] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.464990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.465963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 
11:15:25.466209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.466992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 
[2024-07-26 11:15:25.467343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.467994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468076] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.468971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.469015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.469062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.469106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.469144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.469177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.469216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.070 [2024-07-26 11:15:25.469255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.071 [2024-07-26 11:15:25.469292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.071 [2024-07-26 11:15:25.469331] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.071 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:30.073 11:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:30.073 11:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:30.073 [2024-07-26 11:15:25.484880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.484926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.484971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.485473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 
11:15:25.486586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.486998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 
[2024-07-26 11:15:25.487885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.487930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488509] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.488820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.489001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.073 [2024-07-26 11:15:25.489044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489953] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.489998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.490960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 
11:15:25.491786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.491992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.492973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 
[2024-07-26 11:15:25.493071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493875] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.493967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.074 [2024-07-26 11:15:25.494485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:07:30.077 [2024-07-26 11:15:25.509968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510775] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.510959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.511981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 
11:15:25.512020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.512995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 
[2024-07-26 11:15:25.513282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.513470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514657] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.514954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515859] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.515981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.516024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.516067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.077 [2024-07-26 11:15:25.516113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.516960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 
11:15:25.517340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.517989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 
[2024-07-26 11:15:25.518577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.518992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.519033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.519071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.519113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.519161] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 [2024-07-26 11:15:25.519206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.078 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:30.080 [2024-07-26 11:15:25.534781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.534817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.534856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.534897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.534941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.534980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 
11:15:25.535380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.080 [2024-07-26 11:15:25.535913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.535952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.535994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 
[2024-07-26 11:15:25.536592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.536840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.537971] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.538963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539177] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.539976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 
11:15:25.540686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.540960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 
[2024-07-26 11:15:25.541909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.081 [2024-07-26 11:15:25.541998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542494] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.542952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.543002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.543048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.543090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.082 [2024-07-26 11:15:25.543137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.543185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.543236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544506] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.082 [2024-07-26 11:15:25.544545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [previous message repeated for timestamps 2024-07-26 11:15:25.544590 through 2024-07-26 11:15:25.558722]
> SGL length 1 00:07:30.084 [2024-07-26 11:15:25.558770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.558811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.558855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.558901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.558950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.558997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559417] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.559973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 
11:15:25.560849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.560966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.084 [2024-07-26 11:15:25.561570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.561991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 
[2024-07-26 11:15:25.562035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.562403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563430] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.563993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564636] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.564979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.565894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 
11:15:25.565940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.566970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 
[2024-07-26 11:15:25.567392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.567966] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.085 [2024-07-26 11:15:25.568416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.086 [2024-07-26 11:15:25.568464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.086 [2024-07-26 11:15:25.568510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.086 [2024-07-26 11:15:25.568563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.086 [2024-07-26 11:15:25.568609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:30.086 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:30.088 [2024-07-26 11:15:25.584469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26
11:15:25.584511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.584963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 
[2024-07-26 11:15:25.585697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.585775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586814] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.586948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.587976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588052] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.088 [2024-07-26 11:15:25.588630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.588680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.588724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.588772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.588820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.588866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.588915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.588967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 
11:15:25.589478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.589526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.590998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 
[2024-07-26 11:15:25.591327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591938] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.591984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.592978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593414] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.593984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.089 [2024-07-26 11:15:25.594030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical nvmf_bdev_ctrlr_read_cmd errors repeated from 11:15:25.594069 through 11:15:25.609371 elided ...] 00:07:30.092 [2024-07-26 11:15:25.609421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.609838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 
11:15:25.610265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.610979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 
[2024-07-26 11:15:25.611529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.611963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612173] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.612831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.613972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614203] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.614982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 
11:15:25.615527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.615960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.616006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.616048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.092 [2024-07-26 11:15:25.616084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 
[2024-07-26 11:15:25.616875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.616962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617445] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.617966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618716] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.093 [2024-07-26 11:15:25.618756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical error repeated continuously from 11:15:25.618756 through 11:15:25.634658 (log timestamps 00:07:30.093-00:07:30.095) ...] 00:07:30.094 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:30.095 [2024-07-26 11:15:25.634699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.634736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.634781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.634823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.634869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.634910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.634955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.634998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 
[2024-07-26 11:15:25.635313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.635978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.636026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.095 [2024-07-26 11:15:25.636078] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.636983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637669] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.637981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.638975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 
11:15:25.639020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.639993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 
[2024-07-26 11:15:25.640430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.640977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641011] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.641950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.642002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.642046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.642094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.642143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.642962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.643008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.643047] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.643089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.643131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.643170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.643215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.096 [2024-07-26 11:15:25.643254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.643995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.644034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.644078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.644117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.644152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 11:15:25.644196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:30.097 [2024-07-26 
11:15:25.644236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same ctrlr_bdev.c:309 *ERROR* line repeated verbatim at timestamps 11:15:25.644275 through 11:15:25.647136; repeats omitted ...]
00:07:30.097 true
[... same ctrlr_bdev.c:309 *ERROR* line repeated verbatim at timestamps 11:15:25.647175 through 11:15:25.653269; repeats omitted ...]
00:07:30.098 [2024-07-26
11:15:25.653317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:30.098 [2024-07-26 11:15:25.653360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:30.098 [2024-07-26 11:15:25.653403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:30.098 [2024-07-26 11:15:25.653448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:30.098 [2024-07-26 11:15:25.653495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:30.098 11:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620
00:07:30.098 11:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:31.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:31.293 11:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.293 11:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:31.293 11:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:31.552 true
00:07:31.552 11:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620
00:07:31.552 11:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:31.810 11:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.810 11:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:07:31.810 11:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:32.069 true
00:07:32.069 11:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620
00:07:32.069 11:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:33.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:33.444 11:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:33.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:33.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:33.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:33.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:33.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:33.444 11:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
11:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:07:33.702 true
00:07:33.702 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620
00:07:33.702 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:34.637 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:34.637 Initializing NVMe Controllers
00:07:34.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:34.637 Controller IO queue size 128, less than required.
00:07:34.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:34.637 Controller IO queue size 128, less than required.
00:07:34.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:34.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:34.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:34.637 Initialization complete. Launching workers.
00:07:34.637 ========================================================
00:07:34.637 Latency(us)
00:07:34.637 Device Information : IOPS MiB/s Average min max
00:07:34.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2694.80 1.32 32473.17 1870.84 1012465.84
00:07:34.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17611.93 8.60 7267.81 2448.32 321943.04
00:07:34.637 ========================================================
00:07:34.637 Total : 20306.73 9.92 10612.68 1870.84 1012465.84
00:07:34.637
00:07:34.637 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:07:34.637 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:07:34.896 true
00:07:34.896 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1356620
00:07:34.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1356620) - No such process
00:07:34.896 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1356620
00:07:34.896 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:34.896 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:35.156 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:35.156 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:35.156 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:35.156 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:35.156 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:35.414 null0
00:07:35.414 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:35.414 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:35.414 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:35.414 null1
00:07:35.673 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:35.673 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:35.673 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:35.673 null2
00:07:35.673 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:35.673 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:35.673 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:35.931 null3
00:07:35.931 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:35.931 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:35.931 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:36.189 null4
00:07:36.189 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:36.189 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:36.189 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:36.189 null5
00:07:36.189 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:36.189 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:36.189 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:36.448 null6
00:07:36.448 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:36.448 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:36.448 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:36.707 null7
00:07:36.707 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1362742 1362743 1362745 1362746 1362749 1362751 1362753 1362754 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.708 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.967 11:15:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.967 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.226 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.485 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.485 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.485 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.485 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.485 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.485 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.485 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.485 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.744 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.016 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.296 11:15:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.296 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.297 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.555 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.555 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.555 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.555 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.555 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.556 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.556 11:15:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.815 11:15:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.074 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.333 11:15:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.333 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.334 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.593 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.593 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.593 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.593 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.594 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.852 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.112 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.112 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.113 11:15:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.113 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.372 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.372 rmmod nvme_tcp 00:07:40.372 rmmod nvme_fabrics 00:07:40.372 rmmod nvme_keyring 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@124 -- # set -e 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1356199 ']' 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1356199 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1356199 ']' 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1356199 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.372 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1356199 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1356199' 00:07:40.631 killing process with pid 1356199 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1356199 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1356199 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.631 11:15:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.631 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.167 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.168 00:07:43.168 real 0m47.609s 00:07:43.168 user 3m13.784s 00:07:43.168 sys 0m15.760s 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 ************************************ 00:07:43.168 END TEST nvmf_ns_hotplug_stress 00:07:43.168 ************************************ 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 ************************************ 00:07:43.168 START TEST nvmf_delete_subsystem 00:07:43.168 
************************************ 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:43.168 * Looking for test storage... 00:07:43.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.168 11:15:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:43.168 11:15:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.168 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:48.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.445 11:15:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:48.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:48.445 Found net devices under 0000:86:00.0: cvl_0_0 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:48.445 Found net devices under 0000:86:00.1: cvl_0_1 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.445 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.446 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.446 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.446 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.446 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.446 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip 
-4 addr flush cvl_0_1 00:07:48.446 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.446 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:07:48.705 00:07:48.705 --- 10.0.0.2 ping statistics --- 00:07:48.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.705 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:48.705 00:07:48.705 --- 10.0.0.1 ping statistics --- 00:07:48.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.705 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1367119 00:07:48.705 11:15:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1367119 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1367119 ']' 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.705 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.965 [2024-07-26 11:15:44.400577] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:07:48.965 [2024-07-26 11:15:44.400623] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.965 [2024-07-26 11:15:44.473014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.965 [2024-07-26 11:15:44.546809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:48.965 [2024-07-26 11:15:44.546850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.965 [2024-07-26 11:15:44.546857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.965 [2024-07-26 11:15:44.546862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.965 [2024-07-26 11:15:44.546868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.965 [2024-07-26 11:15:44.547012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.965 [2024-07-26 11:15:44.547011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 [2024-07-26 11:15:45.270807] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 [2024-07-26 11:15:45.290992] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 NULL1 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.901 11:15:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 Delay0 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1367366 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:49.901 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:49.901 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.901 [2024-07-26 11:15:45.381721] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:51.803 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.803 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.803 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 
00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 [2024-07-26 11:15:47.478711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d34000c00 is same with the state(5) to be set 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, 
sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 [2024-07-26 11:15:47.479126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d3400d000 is same with the state(5) to be set 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error 
(sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Read completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 Write completed with error (sct=0, sc=8) 00:07:52.070 starting I/O failed: -6 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with 
error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 starting I/O failed: -6 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 [2024-07-26 11:15:47.479522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11203e0 is same with the state(5) to be set 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error 
(sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Write completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 Read completed with error (sct=0, sc=8) 00:07:52.071 [2024-07-26 11:15:47.479713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d3400d7c0 is same with the state(5) to be set 00:07:53.186 [2024-07-26 11:15:48.435880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1121ac0 is same with the state(5) to be set 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error 
(sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 [2024-07-26 11:15:48.481796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120000 is same with the state(5) to be set 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 
00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Write completed with error (sct=0, sc=8) 00:07:53.186 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 [2024-07-26 11:15:48.482243] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120710 is same with the state(5) to be set 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 
Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 [2024-07-26 11:15:48.482391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120a40 is same with the state(5) to be set 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Write completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 Read completed with error (sct=0, sc=8) 00:07:53.187 [2024-07-26 11:15:48.482483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d3400d330 is same with the state(5) to be set 00:07:53.187 Initializing NVMe Controllers 00:07:53.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:53.187 Controller IO queue size 128, less than required. 00:07:53.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:53.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:53.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:53.187 Initialization complete. Launching workers. 00:07:53.187 ======================================================== 00:07:53.187 Latency(us) 00:07:53.187 Device Information : IOPS MiB/s Average min max 00:07:53.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.75 0.09 985923.55 728.34 2001855.45 00:07:53.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.51 0.07 921728.99 439.60 2002075.30 00:07:53.187 ======================================================== 00:07:53.187 Total : 336.27 0.16 957571.74 439.60 2002075.30 00:07:53.187 00:07:53.187 [2024-07-26 11:15:48.483148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1121ac0 (9): Bad file descriptor 00:07:53.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:53.187 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.187 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:53.187 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1367366 00:07:53.187 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1367366 00:07:53.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1367366) - No such process 00:07:53.446 11:15:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1367366 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1367366 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1367366 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.446 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.446 [2024-07-26 11:15:49.012137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1367859 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:53.446 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:53.446 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.446 [2024-07-26 11:15:49.092101] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:54.014 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.014 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:54.014 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.582 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.582 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:54.582 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:55.149 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:55.149 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:55.149 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:55.408 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:55.408 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:55.408 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:55.976 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:07:55.976 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:55.976 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.543 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.543 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:56.543 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.802 Initializing NVMe Controllers 00:07:56.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:56.802 Controller IO queue size 128, less than required. 00:07:56.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:56.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:56.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:56.802 Initialization complete. Launching workers. 
00:07:56.802 ======================================================== 00:07:56.802 Latency(us) 00:07:56.802 Device Information : IOPS MiB/s Average min max 00:07:56.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002610.13 1000145.36 1042889.28 00:07:56.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003298.53 1000246.08 1010368.18 00:07:56.803 ======================================================== 00:07:56.803 Total : 256.00 0.12 1002954.33 1000145.36 1042889.28 00:07:56.803 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1367859 00:07:57.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1367859) - No such process 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1367859 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:07:57.062 rmmod nvme_tcp 00:07:57.062 rmmod nvme_fabrics 00:07:57.062 rmmod nvme_keyring 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1367119 ']' 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1367119 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1367119 ']' 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1367119 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1367119 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1367119' 00:07:57.062 killing process with pid 1367119 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1367119 00:07:57.062 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
1367119 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.321 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.858 00:07:59.858 real 0m16.539s 00:07:59.858 user 0m30.464s 00:07:59.858 sys 0m5.179s 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.858 ************************************ 00:07:59.858 END TEST nvmf_delete_subsystem 00:07:59.858 ************************************ 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.858 ************************************ 00:07:59.858 START TEST nvmf_host_management 00:07:59.858 ************************************ 00:07:59.858 11:15:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:59.858 * Looking for test storage... 00:07:59.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.858 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.858 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:59.858 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.858 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.858 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.859 11:15:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.859 11:15:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.859 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.138 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:05.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:05.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:05.139 Found net devices under 0000:86:00.0: cvl_0_0 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:08:05.139 Found net devices under 0000:86:00.1: cvl_0_1 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.139 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:08:05.457 00:08:05.457 --- 10.0.0.2 ping statistics --- 00:08:05.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.457 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:08:05.457 00:08:05.457 --- 10.0.0.1 ping statistics --- 00:08:05.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.457 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.457 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.458 11:16:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1372062 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1372062 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1372062 ']' 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.458 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.458 [2024-07-26 11:16:00.932906] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:08:05.458 [2024-07-26 11:16:00.932948] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.458 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.458 [2024-07-26 11:16:01.001461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.458 [2024-07-26 11:16:01.074069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.458 [2024-07-26 11:16:01.074109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.458 [2024-07-26 11:16:01.074115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.458 [2024-07-26 11:16:01.074121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.458 [2024-07-26 11:16:01.074125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:05.458 [2024-07-26 11:16:01.074249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.458 [2024-07-26 11:16:01.074362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.458 [2024-07-26 11:16:01.074446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.458 [2024-07-26 11:16:01.074447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.392 [2024-07-26 11:16:01.765275] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:06.392 11:16:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.392 Malloc0 00:08:06.392 [2024-07-26 11:16:01.824991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1372245 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1372245 /var/tmp/bdevperf.sock 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1372245 ']' 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:06.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:06.392 { 00:08:06.392 "params": { 00:08:06.392 "name": "Nvme$subsystem", 00:08:06.392 "trtype": "$TEST_TRANSPORT", 00:08:06.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.392 "adrfam": "ipv4", 00:08:06.392 "trsvcid": "$NVMF_PORT", 00:08:06.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.392 "hdgst": ${hdgst:-false}, 
00:08:06.392 "ddgst": ${ddgst:-false} 00:08:06.392 }, 00:08:06.392 "method": "bdev_nvme_attach_controller" 00:08:06.392 } 00:08:06.392 EOF 00:08:06.392 )") 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:06.392 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:06.393 "params": { 00:08:06.393 "name": "Nvme0", 00:08:06.393 "trtype": "tcp", 00:08:06.393 "traddr": "10.0.0.2", 00:08:06.393 "adrfam": "ipv4", 00:08:06.393 "trsvcid": "4420", 00:08:06.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:06.393 "hdgst": false, 00:08:06.393 "ddgst": false 00:08:06.393 }, 00:08:06.393 "method": "bdev_nvme_attach_controller" 00:08:06.393 }' 00:08:06.393 [2024-07-26 11:16:01.918396] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:08:06.393 [2024-07-26 11:16:01.918443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372245 ] 00:08:06.393 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.393 [2024-07-26 11:16:01.983768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.651 [2024-07-26 11:16:02.056697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.910 Running I/O for 10 seconds... 
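The JSON printed above is the output of nvmf/common.sh's gen_nvmf_target_json: a heredoc template is expanded once per subsystem into bdev_nvme_attach_controller params and the fragments are then joined with jq before being fed to bdevperf via --json /dev/fd/63. Below is a hedged, self-contained sketch of that template expansion only (no jq join); the variable values are taken from the log, the function name matches common.sh, but the body is a simplification, not the real helper.

```shell
# Sketch of the gen_nvmf_target_json template expansion seen in the xtrace
# above. Values mirror the resolved config printed in the log; this is an
# illustration of the heredoc pattern, not SPDK's actual helper.

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    subsystem=${1:-0}
    # ${hdgst:-false} / ${ddgst:-false} default the digest flags off,
    # exactly as in the expanded JSON shown in the log.
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvmf_target_json 0
```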
00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=860 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 860 -ge 100 ']' 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.171 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.171 [2024-07-26 11:16:02.808124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:07.171 [2024-07-26 11:16:02.808163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.171 [2024-07-26 11:16:02.808172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:07.171 [2024-07-26 11:16:02.808179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.171 [2024-07-26 11:16:02.808186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:07.172 [2024-07-26 11:16:02.808193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.808200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:07.172 [2024-07-26 11:16:02.808206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.808212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928980 is same with the state(5) to be set 00:08:07.172 [2024-07-26 11:16:02.808951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.172 [2024-07-26 11:16:02.808970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 
[2024-07-26 11:16:02.809262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 [2024-07-26 11:16:02.809345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.172 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:07.172 [2024-07-26 11:16:02.809363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.172 [2024-07-26 11:16:02.809374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:07.173 [2024-07-26 11:16:02.809674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.173 [2024-07-26 11:16:02.809681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.173 [2024-07-26 11:16:02.809764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.173 [2024-07-26 11:16:02.809772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 [2024-07-26 11:16:02.809900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.174 [2024-07-26 11:16:02.809907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.174 
[2024-07-26 11:16:02.809915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:07.174 [2024-07-26 11:16:02.809921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:07.174 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:07.174 [2024-07-26 11:16:02.809980] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d5a660 was disconnected and freed. reset controller.
00:08:07.174 [2024-07-26 11:16:02.810854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:07.174 task offset: 122880 on job bdev=Nvme0n1 fails
00:08:07.174 
00:08:07.174 Latency(us)
00:08:07.174 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:07.174 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:07.174 Job: Nvme0n1 ended in about 0.48 seconds with error
00:08:07.174 	 Verification LBA range: start 0x0 length 0x400
00:08:07.174 	 Nvme0n1                   :       0.48    2000.02     125.00     133.33       0.00   29288.14    1388.74   26588.89
00:08:07.174 ===================================================================================================================
00:08:07.174 Total                       :               2000.02     125.00     133.33       0.00   29288.14    1388.74   26588.89
00:08:07.174 [2024-07-26 11:16:02.812371] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:07.174 [2024-07-26 11:16:02.812384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928980 (9): Bad file descriptor
00:08:07.174 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:07.174 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:07.174 [2024-07-26 11:16:02.861655] 
bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1372245 00:08:08.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1372245) - No such process 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:08.371 { 00:08:08.371 "params": { 00:08:08.371 "name": "Nvme$subsystem", 00:08:08.371 "trtype": "$TEST_TRANSPORT", 00:08:08.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.371 "adrfam": "ipv4", 00:08:08.371 "trsvcid": "$NVMF_PORT", 00:08:08.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.371 "hdgst": ${hdgst:-false}, 00:08:08.371 "ddgst": ${ddgst:-false} 00:08:08.371 }, 00:08:08.371 "method": 
"bdev_nvme_attach_controller" 00:08:08.371 } 00:08:08.371 EOF 00:08:08.371 )") 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:08.371 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:08.371 "params": { 00:08:08.371 "name": "Nvme0", 00:08:08.371 "trtype": "tcp", 00:08:08.371 "traddr": "10.0.0.2", 00:08:08.371 "adrfam": "ipv4", 00:08:08.371 "trsvcid": "4420", 00:08:08.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:08.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:08.371 "hdgst": false, 00:08:08.371 "ddgst": false 00:08:08.371 }, 00:08:08.371 "method": "bdev_nvme_attach_controller" 00:08:08.371 }' 00:08:08.371 [2024-07-26 11:16:03.871459] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:08:08.371 [2024-07-26 11:16:03.871505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372580 ] 00:08:08.371 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.371 [2024-07-26 11:16:03.935014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.371 [2024-07-26 11:16:04.006016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.941 Running I/O for 1 seconds... 
00:08:09.879 00:08:09.879 Latency(us) 00:08:09.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.879 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:09.879 Verification LBA range: start 0x0 length 0x400 00:08:09.879 Nvme0n1 : 1.01 2046.82 127.93 0.00 0.00 30680.20 1825.65 26588.89 00:08:09.879 =================================================================================================================== 00:08:09.879 Total : 2046.82 127.93 0.00 0.00 30680.20 1825.65 26588.89 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.879 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.879 rmmod nvme_tcp 
00:08:10.137 rmmod nvme_fabrics 00:08:10.137 rmmod nvme_keyring 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1372062 ']' 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1372062 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1372062 ']' 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1372062 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1372062 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1372062' 00:08:10.137 killing process with pid 1372062 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1372062 00:08:10.137 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1372062 00:08:10.397 [2024-07-26 11:16:05.833230] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.397 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:12.315 00:08:12.315 real 0m12.923s 00:08:12.315 user 0m23.240s 00:08:12.315 sys 0m5.459s 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.315 ************************************ 00:08:12.315 END TEST nvmf_host_management 00:08:12.315 ************************************ 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.315 11:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.575 ************************************ 00:08:12.575 START TEST nvmf_lvol 00:08:12.575 ************************************ 00:08:12.575 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:12.575 * Looking for test storage... 00:08:12.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:08:12.575 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.576 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.234 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.234 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.234 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:19.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:19.235 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.235 11:16:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:19.235 Found net devices under 0000:86:00.0: cvl_0_0 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:19.235 Found net devices under 0000:86:00.1: cvl_0_1 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:08:19.235 00:08:19.235 --- 10.0.0.2 ping statistics --- 00:08:19.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.235 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:08:19.235 00:08:19.235 --- 10.0.0.1 ping statistics --- 00:08:19.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.235 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:19.235 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1376354 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1376354 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1376354 ']' 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.236 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.236 [2024-07-26 11:16:13.995962] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:08:19.236 [2024-07-26 11:16:13.996005] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.236 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.236 [2024-07-26 11:16:14.062066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.236 [2024-07-26 11:16:14.138429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.236 [2024-07-26 11:16:14.138465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.236 [2024-07-26 11:16:14.138475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.236 [2024-07-26 11:16:14.138481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.236 [2024-07-26 11:16:14.138485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:19.236 [2024-07-26 11:16:14.138549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.236 [2024-07-26 11:16:14.138670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.236 [2024-07-26 11:16:14.138669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.236 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.236 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:19.236 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.236 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.236 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.236 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.236 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:19.494 [2024-07-26 11:16:14.975832] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.494 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:19.754 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:19.754 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:19.754 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:19.754 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:20.013 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:20.272 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6ec3ab8e-5c09-46a7-a345-5145e6932a88 00:08:20.272 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6ec3ab8e-5c09-46a7-a345-5145e6932a88 lvol 20 00:08:20.530 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6d93688b-fb9d-4b0d-8693-e4fc3efe9dda 00:08:20.530 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.530 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d93688b-fb9d-4b0d-8693-e4fc3efe9dda 00:08:20.789 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.048 [2024-07-26 11:16:16.482295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.049 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.049 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:21.049 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1376849 00:08:21.049 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:21.049 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.425 11:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6d93688b-fb9d-4b0d-8693-e4fc3efe9dda MY_SNAPSHOT 00:08:22.425 11:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d0a4d7ab-a27f-4d12-bff7-02e8033e1acc 00:08:22.425 11:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6d93688b-fb9d-4b0d-8693-e4fc3efe9dda 30 00:08:22.684 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d0a4d7ab-a27f-4d12-bff7-02e8033e1acc MY_CLONE 00:08:22.943 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1d7500c3-1abf-4429-a787-895c3c6de17d 00:08:22.943 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1d7500c3-1abf-4429-a787-895c3c6de17d 00:08:23.510 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1376849 00:08:31.622 Initializing NVMe Controllers 00:08:31.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:31.622 Controller IO queue size 128, less than required. 00:08:31.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:31.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:31.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:31.622 Initialization complete. Launching workers. 00:08:31.622 ======================================================== 00:08:31.622 Latency(us) 00:08:31.622 Device Information : IOPS MiB/s Average min max 00:08:31.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12690.59 49.57 10088.93 1670.35 105927.41 00:08:31.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12576.30 49.13 10179.11 3688.04 47968.64 00:08:31.622 ======================================================== 00:08:31.622 Total : 25266.89 98.70 10133.82 1670.35 105927.41 00:08:31.622 00:08:31.622 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:31.881 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d93688b-fb9d-4b0d-8693-e4fc3efe9dda 00:08:31.881 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6ec3ab8e-5c09-46a7-a345-5145e6932a88 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:32.140 11:16:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.140 rmmod nvme_tcp 00:08:32.140 rmmod nvme_fabrics 00:08:32.140 rmmod nvme_keyring 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1376354 ']' 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1376354 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1376354 ']' 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1376354 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1376354 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1376354' 00:08:32.140 killing process with pid 1376354 00:08:32.140 
11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1376354 00:08:32.140 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1376354 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.399 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.936 00:08:34.936 real 0m22.098s 00:08:34.936 user 1m4.492s 00:08:34.936 sys 0m6.913s 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.936 ************************************ 00:08:34.936 END TEST nvmf_lvol 00:08:34.936 ************************************ 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:34.936 11:16:30 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.936 ************************************ 00:08:34.936 START TEST nvmf_lvs_grow 00:08:34.936 ************************************ 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.936 * Looking for test storage... 00:08:34.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.936 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.214 11:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:40.214 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.214 
11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:40.214 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.214 11:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:40.214 Found net devices under 0000:86:00.0: cvl_0_0 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:40.214 Found net devices under 0000:86:00.1: cvl_0_1 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.214 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.215 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.215 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.215 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.215 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.215 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.215 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.215 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.474 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.474 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.474 11:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.474 11:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:08:40.474 00:08:40.474 --- 10.0.0.2 ping statistics --- 00:08:40.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.474 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:08:40.474 00:08:40.474 --- 10.0.0.1 ping statistics --- 00:08:40.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.474 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.474 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.732 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1382214 00:08:40.733 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1382214 00:08:40.733 11:16:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:40.733 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1382214 ']' 00:08:40.733 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.733 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.733 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.733 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.733 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.733 [2024-07-26 11:16:36.191278] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:08:40.733 [2024-07-26 11:16:36.191325] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.733 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.733 [2024-07-26 11:16:36.261078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.733 [2024-07-26 11:16:36.338231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.733 [2024-07-26 11:16:36.338260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:40.733 [2024-07-26 11:16:36.338267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.733 [2024-07-26 11:16:36.338272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.733 [2024-07-26 11:16:36.338276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.733 [2024-07-26 11:16:36.338309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.669 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.669 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:41.669 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.669 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.669 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:41.669 [2024-07-26 11:16:37.170217] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.669 ************************************ 00:08:41.669 START TEST lvs_grow_clean 00:08:41.669 ************************************ 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.669 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.928 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:41.928 11:16:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:42.187 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:42.187 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:42.187 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:42.187 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:42.187 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:42.187 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47ba49b5-426b-4905-a417-03ba1d4b0223 lvol 150 00:08:42.446 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1c64657d-ba6d-480c-84fc-a6a0e53db9b5 00:08:42.446 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.446 11:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:42.446 [2024-07-26 11:16:38.100303] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:42.446 [2024-07-26 11:16:38.100350] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:42.446 true 00:08:42.705 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:42.705 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:42.705 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:42.705 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:42.964 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1c64657d-ba6d-480c-84fc-a6a0e53db9b5 00:08:42.964 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:43.222 [2024-07-26 11:16:38.762291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.222 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.481 11:16:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1382721 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1382721 /var/tmp/bdevperf.sock 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1382721 ']' 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.481 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:43.481 [2024-07-26 11:16:38.974222] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:08:43.481 [2024-07-26 11:16:38.974267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382721 ] 00:08:43.481 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.481 [2024-07-26 11:16:39.037525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.481 [2024-07-26 11:16:39.108718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.417 11:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.417 11:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:44.417 11:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:44.417 Nvme0n1 00:08:44.417 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:44.676 [ 00:08:44.676 { 00:08:44.676 "name": "Nvme0n1", 00:08:44.676 "aliases": [ 00:08:44.676 "1c64657d-ba6d-480c-84fc-a6a0e53db9b5" 00:08:44.676 ], 00:08:44.676 "product_name": "NVMe disk", 00:08:44.676 "block_size": 4096, 00:08:44.676 "num_blocks": 38912, 00:08:44.676 "uuid": "1c64657d-ba6d-480c-84fc-a6a0e53db9b5", 00:08:44.676 "assigned_rate_limits": { 00:08:44.676 "rw_ios_per_sec": 0, 00:08:44.676 "rw_mbytes_per_sec": 0, 00:08:44.676 "r_mbytes_per_sec": 0, 00:08:44.676 "w_mbytes_per_sec": 0 00:08:44.676 }, 00:08:44.676 "claimed": false, 00:08:44.676 "zoned": false, 00:08:44.676 
"supported_io_types": { 00:08:44.676 "read": true, 00:08:44.676 "write": true, 00:08:44.676 "unmap": true, 00:08:44.676 "flush": true, 00:08:44.676 "reset": true, 00:08:44.676 "nvme_admin": true, 00:08:44.676 "nvme_io": true, 00:08:44.676 "nvme_io_md": false, 00:08:44.676 "write_zeroes": true, 00:08:44.676 "zcopy": false, 00:08:44.676 "get_zone_info": false, 00:08:44.676 "zone_management": false, 00:08:44.676 "zone_append": false, 00:08:44.676 "compare": true, 00:08:44.676 "compare_and_write": true, 00:08:44.676 "abort": true, 00:08:44.676 "seek_hole": false, 00:08:44.676 "seek_data": false, 00:08:44.676 "copy": true, 00:08:44.676 "nvme_iov_md": false 00:08:44.676 }, 00:08:44.676 "memory_domains": [ 00:08:44.676 { 00:08:44.676 "dma_device_id": "system", 00:08:44.676 "dma_device_type": 1 00:08:44.676 } 00:08:44.676 ], 00:08:44.676 "driver_specific": { 00:08:44.676 "nvme": [ 00:08:44.676 { 00:08:44.676 "trid": { 00:08:44.676 "trtype": "TCP", 00:08:44.676 "adrfam": "IPv4", 00:08:44.676 "traddr": "10.0.0.2", 00:08:44.676 "trsvcid": "4420", 00:08:44.676 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:44.676 }, 00:08:44.676 "ctrlr_data": { 00:08:44.676 "cntlid": 1, 00:08:44.676 "vendor_id": "0x8086", 00:08:44.676 "model_number": "SPDK bdev Controller", 00:08:44.676 "serial_number": "SPDK0", 00:08:44.676 "firmware_revision": "24.09", 00:08:44.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:44.676 "oacs": { 00:08:44.676 "security": 0, 00:08:44.676 "format": 0, 00:08:44.676 "firmware": 0, 00:08:44.676 "ns_manage": 0 00:08:44.676 }, 00:08:44.676 "multi_ctrlr": true, 00:08:44.676 "ana_reporting": false 00:08:44.676 }, 00:08:44.676 "vs": { 00:08:44.676 "nvme_version": "1.3" 00:08:44.676 }, 00:08:44.676 "ns_data": { 00:08:44.676 "id": 1, 00:08:44.676 "can_share": true 00:08:44.676 } 00:08:44.676 } 00:08:44.676 ], 00:08:44.676 "mp_policy": "active_passive" 00:08:44.676 } 00:08:44.676 } 00:08:44.676 ] 00:08:44.676 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1382953 00:08:44.676 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:44.676 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:44.676 Running I/O for 10 seconds... 00:08:46.054 Latency(us) 00:08:46.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.054 Nvme0n1 : 1.00 23364.00 91.27 0.00 0.00 0.00 0.00 0.00 00:08:46.054 =================================================================================================================== 00:08:46.054 Total : 23364.00 91.27 0.00 0.00 0.00 0.00 0.00 00:08:46.054 00:08:46.622 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:46.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.881 Nvme0n1 : 2.00 23581.50 92.12 0.00 0.00 0.00 0.00 0.00 00:08:46.881 =================================================================================================================== 00:08:46.881 Total : 23581.50 92.12 0.00 0.00 0.00 0.00 0.00 00:08:46.881 00:08:46.881 true 00:08:46.881 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:46.881 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:47.139 11:16:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:47.139 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:47.139 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1382953 00:08:47.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.705 Nvme0n1 : 3.00 23668.67 92.46 0.00 0.00 0.00 0.00 0.00 00:08:47.705 =================================================================================================================== 00:08:47.705 Total : 23668.67 92.46 0.00 0.00 0.00 0.00 0.00 00:08:47.705 00:08:48.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.637 Nvme0n1 : 4.00 23771.00 92.86 0.00 0.00 0.00 0.00 0.00 00:08:48.637 =================================================================================================================== 00:08:48.637 Total : 23771.00 92.86 0.00 0.00 0.00 0.00 0.00 00:08:48.637 00:08:50.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.012 Nvme0n1 : 5.00 23800.60 92.97 0.00 0.00 0.00 0.00 0.00 00:08:50.012 =================================================================================================================== 00:08:50.012 Total : 23800.60 92.97 0.00 0.00 0.00 0.00 0.00 00:08:50.012 00:08:50.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.948 Nvme0n1 : 6.00 23845.50 93.15 0.00 0.00 0.00 0.00 0.00 00:08:50.948 =================================================================================================================== 00:08:50.948 Total : 23845.50 93.15 0.00 0.00 0.00 0.00 0.00 00:08:50.948 00:08:51.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.944 Nvme0n1 : 7.00 23867.86 93.23 0.00 0.00 0.00 0.00 0.00 00:08:51.944 
=================================================================================================================== 00:08:51.944 Total : 23867.86 93.23 0.00 0.00 0.00 0.00 0.00 00:08:51.944 00:08:52.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.879 Nvme0n1 : 8.00 23882.38 93.29 0.00 0.00 0.00 0.00 0.00 00:08:52.879 =================================================================================================================== 00:08:52.879 Total : 23882.38 93.29 0.00 0.00 0.00 0.00 0.00 00:08:52.879 00:08:53.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.814 Nvme0n1 : 9.00 23909.89 93.40 0.00 0.00 0.00 0.00 0.00 00:08:53.814 =================================================================================================================== 00:08:53.814 Total : 23909.89 93.40 0.00 0.00 0.00 0.00 0.00 00:08:53.814 00:08:54.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.750 Nvme0n1 : 10.00 23906.80 93.39 0.00 0.00 0.00 0.00 0.00 00:08:54.750 =================================================================================================================== 00:08:54.750 Total : 23906.80 93.39 0.00 0.00 0.00 0.00 0.00 00:08:54.750 00:08:54.750 00:08:54.750 Latency(us) 00:08:54.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.751 Nvme0n1 : 10.00 23908.14 93.39 0.00 0.00 5350.74 3105.16 12857.54 00:08:54.751 =================================================================================================================== 00:08:54.751 Total : 23908.14 93.39 0.00 0.00 5350.74 3105.16 12857.54 00:08:54.751 0 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1382721 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 1382721 ']' 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1382721 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1382721 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1382721' 00:08:54.751 killing process with pid 1382721 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1382721 00:08:54.751 Received shutdown signal, test time was about 10.000000 seconds 00:08:54.751 00:08:54.751 Latency(us) 00:08:54.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.751 =================================================================================================================== 00:08:54.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:54.751 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1382721 00:08:55.009 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.267 11:16:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:55.526 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:55.526 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:55.526 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:55.526 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:55.526 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.785 [2024-07-26 11:16:51.261223] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.785 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:55.785 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:55.785 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:55.785 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.785 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.785 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.785 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.786 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.786 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.786 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.786 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.786 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:56.045 request: 00:08:56.045 { 00:08:56.045 "uuid": "47ba49b5-426b-4905-a417-03ba1d4b0223", 00:08:56.045 "method": "bdev_lvol_get_lvstores", 00:08:56.045 "req_id": 1 00:08:56.045 } 00:08:56.045 Got JSON-RPC error response 00:08:56.045 response: 00:08:56.045 { 00:08:56.045 "code": -19, 00:08:56.045 "message": "No such device" 00:08:56.045 } 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:56.045 11:16:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.045 aio_bdev 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1c64657d-ba6d-480c-84fc-a6a0e53db9b5 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=1c64657d-ba6d-480c-84fc-a6a0e53db9b5 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.045 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.304 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1c64657d-ba6d-480c-84fc-a6a0e53db9b5 -t 2000 00:08:56.304 [ 00:08:56.304 { 
00:08:56.304 "name": "1c64657d-ba6d-480c-84fc-a6a0e53db9b5", 00:08:56.304 "aliases": [ 00:08:56.304 "lvs/lvol" 00:08:56.304 ], 00:08:56.304 "product_name": "Logical Volume", 00:08:56.304 "block_size": 4096, 00:08:56.304 "num_blocks": 38912, 00:08:56.304 "uuid": "1c64657d-ba6d-480c-84fc-a6a0e53db9b5", 00:08:56.304 "assigned_rate_limits": { 00:08:56.304 "rw_ios_per_sec": 0, 00:08:56.304 "rw_mbytes_per_sec": 0, 00:08:56.304 "r_mbytes_per_sec": 0, 00:08:56.304 "w_mbytes_per_sec": 0 00:08:56.304 }, 00:08:56.304 "claimed": false, 00:08:56.304 "zoned": false, 00:08:56.304 "supported_io_types": { 00:08:56.304 "read": true, 00:08:56.304 "write": true, 00:08:56.304 "unmap": true, 00:08:56.304 "flush": false, 00:08:56.304 "reset": true, 00:08:56.304 "nvme_admin": false, 00:08:56.304 "nvme_io": false, 00:08:56.304 "nvme_io_md": false, 00:08:56.304 "write_zeroes": true, 00:08:56.304 "zcopy": false, 00:08:56.304 "get_zone_info": false, 00:08:56.304 "zone_management": false, 00:08:56.304 "zone_append": false, 00:08:56.304 "compare": false, 00:08:56.304 "compare_and_write": false, 00:08:56.304 "abort": false, 00:08:56.304 "seek_hole": true, 00:08:56.304 "seek_data": true, 00:08:56.304 "copy": false, 00:08:56.304 "nvme_iov_md": false 00:08:56.304 }, 00:08:56.304 "driver_specific": { 00:08:56.304 "lvol": { 00:08:56.304 "lvol_store_uuid": "47ba49b5-426b-4905-a417-03ba1d4b0223", 00:08:56.304 "base_bdev": "aio_bdev", 00:08:56.304 "thin_provision": false, 00:08:56.304 "num_allocated_clusters": 38, 00:08:56.304 "snapshot": false, 00:08:56.304 "clone": false, 00:08:56.304 "esnap_clone": false 00:08:56.304 } 00:08:56.304 } 00:08:56.304 } 00:08:56.304 ] 00:08:56.304 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:56.304 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:56.304 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:56.563 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:56.563 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:56.563 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:56.822 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:56.822 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1c64657d-ba6d-480c-84fc-a6a0e53db9b5 00:08:56.822 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47ba49b5-426b-4905-a417-03ba1d4b0223 00:08:57.081 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.395 00:08:57.395 real 0m15.617s 00:08:57.395 user 0m15.340s 00:08:57.395 sys 0m1.391s 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.395 11:16:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:57.395 ************************************ 00:08:57.395 END TEST lvs_grow_clean 00:08:57.395 ************************************ 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.395 ************************************ 00:08:57.395 START TEST lvs_grow_dirty 00:08:57.395 ************************************ 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.395 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.654 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:57.654 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:57.654 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=296ee838-2a06-435e-a576-7801c6d21fce 00:08:57.654 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:08:57.654 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:57.913 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:57.913 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:57.913 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
296ee838-2a06-435e-a576-7801c6d21fce lvol 150 00:08:58.172 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ab2fdf28-c063-4d14-bb07-aedbf3c65e14 00:08:58.173 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.173 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:58.173 [2024-07-26 11:16:53.806319] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:58.173 [2024-07-26 11:16:53.806365] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:58.173 true 00:08:58.173 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:08:58.173 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:58.432 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:58.432 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:58.690 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
ab2fdf28-c063-4d14-bb07-aedbf3c65e14 00:08:58.690 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:58.949 [2024-07-26 11:16:54.456265] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.949 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1385410 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1385410 /var/tmp/bdevperf.sock 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1385410 ']' 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:59.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.208 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.208 [2024-07-26 11:16:54.686867] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:08:59.208 [2024-07-26 11:16:54.686916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385410 ] 00:08:59.208 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.208 [2024-07-26 11:16:54.754103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.208 [2024-07-26 11:16:54.832674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.143 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.143 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:00.143 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:00.402 Nvme0n1 00:09:00.402 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:00.402 [ 00:09:00.402 { 00:09:00.402 "name": "Nvme0n1", 00:09:00.402 "aliases": [ 
00:09:00.402 "ab2fdf28-c063-4d14-bb07-aedbf3c65e14" 00:09:00.402 ], 00:09:00.402 "product_name": "NVMe disk", 00:09:00.402 "block_size": 4096, 00:09:00.402 "num_blocks": 38912, 00:09:00.402 "uuid": "ab2fdf28-c063-4d14-bb07-aedbf3c65e14", 00:09:00.402 "assigned_rate_limits": { 00:09:00.402 "rw_ios_per_sec": 0, 00:09:00.402 "rw_mbytes_per_sec": 0, 00:09:00.402 "r_mbytes_per_sec": 0, 00:09:00.402 "w_mbytes_per_sec": 0 00:09:00.402 }, 00:09:00.402 "claimed": false, 00:09:00.402 "zoned": false, 00:09:00.402 "supported_io_types": { 00:09:00.402 "read": true, 00:09:00.402 "write": true, 00:09:00.402 "unmap": true, 00:09:00.402 "flush": true, 00:09:00.402 "reset": true, 00:09:00.402 "nvme_admin": true, 00:09:00.402 "nvme_io": true, 00:09:00.402 "nvme_io_md": false, 00:09:00.402 "write_zeroes": true, 00:09:00.402 "zcopy": false, 00:09:00.402 "get_zone_info": false, 00:09:00.402 "zone_management": false, 00:09:00.402 "zone_append": false, 00:09:00.402 "compare": true, 00:09:00.402 "compare_and_write": true, 00:09:00.402 "abort": true, 00:09:00.402 "seek_hole": false, 00:09:00.402 "seek_data": false, 00:09:00.402 "copy": true, 00:09:00.402 "nvme_iov_md": false 00:09:00.402 }, 00:09:00.402 "memory_domains": [ 00:09:00.402 { 00:09:00.402 "dma_device_id": "system", 00:09:00.402 "dma_device_type": 1 00:09:00.402 } 00:09:00.402 ], 00:09:00.402 "driver_specific": { 00:09:00.402 "nvme": [ 00:09:00.402 { 00:09:00.402 "trid": { 00:09:00.402 "trtype": "TCP", 00:09:00.402 "adrfam": "IPv4", 00:09:00.402 "traddr": "10.0.0.2", 00:09:00.402 "trsvcid": "4420", 00:09:00.402 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:00.402 }, 00:09:00.402 "ctrlr_data": { 00:09:00.402 "cntlid": 1, 00:09:00.402 "vendor_id": "0x8086", 00:09:00.402 "model_number": "SPDK bdev Controller", 00:09:00.402 "serial_number": "SPDK0", 00:09:00.402 "firmware_revision": "24.09", 00:09:00.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:00.402 "oacs": { 00:09:00.402 "security": 0, 00:09:00.402 "format": 0, 00:09:00.402 
"firmware": 0, 00:09:00.402 "ns_manage": 0 00:09:00.402 }, 00:09:00.402 "multi_ctrlr": true, 00:09:00.402 "ana_reporting": false 00:09:00.402 }, 00:09:00.402 "vs": { 00:09:00.402 "nvme_version": "1.3" 00:09:00.402 }, 00:09:00.402 "ns_data": { 00:09:00.402 "id": 1, 00:09:00.402 "can_share": true 00:09:00.402 } 00:09:00.402 } 00:09:00.402 ], 00:09:00.402 "mp_policy": "active_passive" 00:09:00.402 } 00:09:00.402 } 00:09:00.402 ] 00:09:00.403 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1385643 00:09:00.403 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:00.403 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:00.661 Running I/O for 10 seconds... 00:09:01.598 Latency(us) 00:09:01.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.598 Nvme0n1 : 1.00 23520.00 91.88 0.00 0.00 0.00 0.00 0.00 00:09:01.598 =================================================================================================================== 00:09:01.598 Total : 23520.00 91.88 0.00 0.00 0.00 0.00 0.00 00:09:01.598 00:09:02.534 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:02.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.534 Nvme0n1 : 2.00 23730.50 92.70 0.00 0.00 0.00 0.00 0.00 00:09:02.534 =================================================================================================================== 00:09:02.534 Total : 23730.50 92.70 
0.00 0.00 0.00 0.00 0.00 00:09:02.534 00:09:02.793 true 00:09:02.793 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:02.793 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:02.793 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:02.793 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:02.793 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1385643 00:09:03.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.729 Nvme0n1 : 3.00 23779.00 92.89 0.00 0.00 0.00 0.00 0.00 00:09:03.729 =================================================================================================================== 00:09:03.729 Total : 23779.00 92.89 0.00 0.00 0.00 0.00 0.00 00:09:03.729 00:09:04.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.666 Nvme0n1 : 4.00 23840.00 93.12 0.00 0.00 0.00 0.00 0.00 00:09:04.666 =================================================================================================================== 00:09:04.666 Total : 23840.00 93.12 0.00 0.00 0.00 0.00 0.00 00:09:04.666 00:09:05.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.603 Nvme0n1 : 5.00 23898.20 93.35 0.00 0.00 0.00 0.00 0.00 00:09:05.603 =================================================================================================================== 00:09:05.603 Total : 23898.20 93.35 0.00 0.00 0.00 0.00 0.00 00:09:05.603 00:09:06.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:09:06.539 Nvme0n1 : 6.00 23935.33 93.50 0.00 0.00 0.00 0.00 0.00 00:09:06.539 =================================================================================================================== 00:09:06.539 Total : 23935.33 93.50 0.00 0.00 0.00 0.00 0.00 00:09:06.539 00:09:07.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.475 Nvme0n1 : 7.00 23958.43 93.59 0.00 0.00 0.00 0.00 0.00 00:09:07.475 =================================================================================================================== 00:09:07.475 Total : 23958.43 93.59 0.00 0.00 0.00 0.00 0.00 00:09:07.475 00:09:08.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.848 Nvme0n1 : 8.00 23980.50 93.67 0.00 0.00 0.00 0.00 0.00 00:09:08.848 =================================================================================================================== 00:09:08.848 Total : 23980.50 93.67 0.00 0.00 0.00 0.00 0.00 00:09:08.848 00:09:09.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.783 Nvme0n1 : 9.00 23990.11 93.71 0.00 0.00 0.00 0.00 0.00 00:09:09.783 =================================================================================================================== 00:09:09.783 Total : 23990.11 93.71 0.00 0.00 0.00 0.00 0.00 00:09:09.783 00:09:10.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.719 Nvme0n1 : 10.00 24004.20 93.77 0.00 0.00 0.00 0.00 0.00 00:09:10.719 =================================================================================================================== 00:09:10.719 Total : 24004.20 93.77 0.00 0.00 0.00 0.00 0.00 00:09:10.719 00:09:10.719 00:09:10.719 Latency(us) 00:09:10.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.719 Nvme0n1 : 10.00 24010.17 93.79 0.00 0.00 5328.16 
3136.37 13856.18 00:09:10.719 =================================================================================================================== 00:09:10.719 Total : 24010.17 93.79 0.00 0.00 5328.16 3136.37 13856.18 00:09:10.719 0 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1385410 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1385410 ']' 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1385410 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1385410 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1385410' 00:09:10.719 killing process with pid 1385410 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1385410 00:09:10.719 Received shutdown signal, test time was about 10.000000 seconds 00:09:10.719 00:09:10.719 Latency(us) 00:09:10.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.719 =================================================================================================================== 00:09:10.719 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:09:10.719 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1385410 00:09:10.977 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.977 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:11.236 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:11.236 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1382214 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1382214 00:09:11.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1382214 Killed "${NVMF_APP[@]}" "$@" 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1387413 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1387413 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1387413 ']' 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.495 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:11.495 [2024-07-26 11:17:07.025968] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:09:11.495 [2024-07-26 11:17:07.026017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.495 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.495 [2024-07-26 11:17:07.096592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.754 [2024-07-26 11:17:07.173902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.754 [2024-07-26 11:17:07.173935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.754 [2024-07-26 11:17:07.173942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.754 [2024-07-26 11:17:07.173948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.754 [2024-07-26 11:17:07.173952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:11.754 [2024-07-26 11:17:07.173984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.322 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.322 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:12.322 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.322 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:12.322 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.322 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.322 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.581 [2024-07-26 11:17:08.007411] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:12.581 [2024-07-26 11:17:08.007509] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:12.581 [2024-07-26 11:17:08.007534] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ab2fdf28-c063-4d14-bb07-aedbf3c65e14 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ab2fdf28-c063-4d14-bb07-aedbf3c65e14 
00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:12.581 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ab2fdf28-c063-4d14-bb07-aedbf3c65e14 -t 2000 00:09:12.840 [ 00:09:12.840 { 00:09:12.840 "name": "ab2fdf28-c063-4d14-bb07-aedbf3c65e14", 00:09:12.840 "aliases": [ 00:09:12.840 "lvs/lvol" 00:09:12.840 ], 00:09:12.840 "product_name": "Logical Volume", 00:09:12.840 "block_size": 4096, 00:09:12.840 "num_blocks": 38912, 00:09:12.840 "uuid": "ab2fdf28-c063-4d14-bb07-aedbf3c65e14", 00:09:12.840 "assigned_rate_limits": { 00:09:12.840 "rw_ios_per_sec": 0, 00:09:12.840 "rw_mbytes_per_sec": 0, 00:09:12.840 "r_mbytes_per_sec": 0, 00:09:12.840 "w_mbytes_per_sec": 0 00:09:12.840 }, 00:09:12.840 "claimed": false, 00:09:12.840 "zoned": false, 00:09:12.840 "supported_io_types": { 00:09:12.840 "read": true, 00:09:12.840 "write": true, 00:09:12.840 "unmap": true, 00:09:12.840 "flush": false, 00:09:12.840 "reset": true, 00:09:12.840 "nvme_admin": false, 00:09:12.840 "nvme_io": false, 00:09:12.840 "nvme_io_md": false, 00:09:12.840 "write_zeroes": true, 00:09:12.840 "zcopy": false, 00:09:12.840 "get_zone_info": false, 00:09:12.840 "zone_management": false, 00:09:12.840 "zone_append": 
false, 00:09:12.840 "compare": false, 00:09:12.840 "compare_and_write": false, 00:09:12.840 "abort": false, 00:09:12.840 "seek_hole": true, 00:09:12.840 "seek_data": true, 00:09:12.840 "copy": false, 00:09:12.840 "nvme_iov_md": false 00:09:12.840 }, 00:09:12.840 "driver_specific": { 00:09:12.840 "lvol": { 00:09:12.840 "lvol_store_uuid": "296ee838-2a06-435e-a576-7801c6d21fce", 00:09:12.840 "base_bdev": "aio_bdev", 00:09:12.840 "thin_provision": false, 00:09:12.840 "num_allocated_clusters": 38, 00:09:12.840 "snapshot": false, 00:09:12.840 "clone": false, 00:09:12.840 "esnap_clone": false 00:09:12.840 } 00:09:12.840 } 00:09:12.840 } 00:09:12.840 ] 00:09:12.840 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:12.840 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:12.840 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:13.098 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:13.098 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:13.098 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:13.098 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:13.098 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:13.357 [2024-07-26 11:17:08.835789] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.357 11:17:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:13.357 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:13.616 request: 00:09:13.616 { 00:09:13.616 "uuid": "296ee838-2a06-435e-a576-7801c6d21fce", 00:09:13.616 "method": "bdev_lvol_get_lvstores", 00:09:13.616 "req_id": 1 00:09:13.616 } 00:09:13.616 Got JSON-RPC error response 00:09:13.616 response: 00:09:13.616 { 00:09:13.616 "code": -19, 00:09:13.616 "message": "No such device" 00:09:13.616 } 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.616 aio_bdev 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ab2fdf28-c063-4d14-bb07-aedbf3c65e14 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ab2fdf28-c063-4d14-bb07-aedbf3c65e14 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.616 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:13.875 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ab2fdf28-c063-4d14-bb07-aedbf3c65e14 -t 2000 00:09:14.134 [ 00:09:14.134 { 00:09:14.134 "name": "ab2fdf28-c063-4d14-bb07-aedbf3c65e14", 00:09:14.134 "aliases": [ 00:09:14.134 "lvs/lvol" 00:09:14.134 ], 00:09:14.134 "product_name": "Logical Volume", 00:09:14.134 "block_size": 4096, 00:09:14.134 "num_blocks": 38912, 00:09:14.134 "uuid": "ab2fdf28-c063-4d14-bb07-aedbf3c65e14", 00:09:14.134 "assigned_rate_limits": { 00:09:14.134 "rw_ios_per_sec": 0, 00:09:14.134 "rw_mbytes_per_sec": 0, 00:09:14.134 "r_mbytes_per_sec": 0, 00:09:14.134 "w_mbytes_per_sec": 0 00:09:14.134 }, 00:09:14.134 "claimed": false, 00:09:14.134 "zoned": false, 00:09:14.134 "supported_io_types": { 00:09:14.134 "read": true, 00:09:14.134 "write": true, 00:09:14.134 "unmap": true, 00:09:14.134 "flush": false, 00:09:14.134 "reset": true, 00:09:14.134 "nvme_admin": false, 00:09:14.134 "nvme_io": false, 00:09:14.134 "nvme_io_md": false, 00:09:14.134 "write_zeroes": true, 00:09:14.134 "zcopy": false, 00:09:14.134 "get_zone_info": false, 00:09:14.134 "zone_management": false, 00:09:14.134 "zone_append": false, 00:09:14.134 "compare": false, 00:09:14.134 "compare_and_write": false, 
00:09:14.134 "abort": false, 00:09:14.134 "seek_hole": true, 00:09:14.134 "seek_data": true, 00:09:14.134 "copy": false, 00:09:14.134 "nvme_iov_md": false 00:09:14.134 }, 00:09:14.134 "driver_specific": { 00:09:14.134 "lvol": { 00:09:14.134 "lvol_store_uuid": "296ee838-2a06-435e-a576-7801c6d21fce", 00:09:14.134 "base_bdev": "aio_bdev", 00:09:14.134 "thin_provision": false, 00:09:14.134 "num_allocated_clusters": 38, 00:09:14.134 "snapshot": false, 00:09:14.134 "clone": false, 00:09:14.134 "esnap_clone": false 00:09:14.134 } 00:09:14.134 } 00:09:14.134 } 00:09:14.134 ] 00:09:14.134 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:14.134 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:14.134 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:14.134 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:14.134 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:14.134 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:14.393 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:14.393 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ab2fdf28-c063-4d14-bb07-aedbf3c65e14 00:09:14.651 11:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 296ee838-2a06-435e-a576-7801c6d21fce 00:09:14.651 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.911 00:09:14.911 real 0m17.527s 00:09:14.911 user 0m44.930s 00:09:14.911 sys 0m3.756s 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.911 ************************************ 00:09:14.911 END TEST lvs_grow_dirty 00:09:14.911 ************************************ 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:14.911 nvmf_trace.0 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.911 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.911 rmmod nvme_tcp 00:09:14.911 rmmod nvme_fabrics 00:09:14.911 rmmod nvme_keyring 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1387413 ']' 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1387413 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1387413 ']' 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1387413 
00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1387413 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1387413' 00:09:15.170 killing process with pid 1387413 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1387413 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1387413 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.170 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.706 11:17:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.706 00:09:17.706 real 0m42.713s 00:09:17.706 user 1m6.119s 00:09:17.706 sys 0m9.942s 00:09:17.706 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.706 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.706 ************************************ 00:09:17.706 END TEST nvmf_lvs_grow 00:09:17.706 ************************************ 00:09:17.706 11:17:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:17.706 11:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:17.706 11:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.706 11:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.706 ************************************ 00:09:17.706 START TEST nvmf_bdev_io_wait 00:09:17.706 ************************************ 00:09:17.706 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:17.706 * Looking for test storage... 
00:09:17.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.706 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:17.707 11:17:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.707 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.037 11:17:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:23.037 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.037 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:23.038 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.038 11:17:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:23.038 Found net devices under 0000:86:00.0: cvl_0_0 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:23.038 Found net devices under 0000:86:00.1: cvl_0_1 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.038 11:17:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.038 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:09:23.297 00:09:23.297 --- 10.0.0.2 ping statistics --- 00:09:23.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.297 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:09:23.297 00:09:23.297 --- 10.0.0.1 ping statistics --- 00:09:23.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.297 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1391684 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@482 -- # waitforlisten 1391684 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1391684 ']' 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.297 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.297 [2024-07-26 11:17:18.951569] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:09:23.297 [2024-07-26 11:17:18.951610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.557 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.557 [2024-07-26 11:17:19.019925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.557 [2024-07-26 11:17:19.099491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
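Aside: the `nvmf_tcp_init` sequence traced above (nvmf/common.sh@229–268) is easier to follow pulled out of the xtrace noise. The following is a dry-run sketch that prints the equivalent `ip`/`iptables` commands instead of executing them (the real steps need root and the `cvl_0_*` interfaces); interface names, namespace name, and addresses are copied from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the log. cvl_0_0 is
# moved into a namespace as the target-side interface; cvl_0_1 stays in
# the default namespace as the initiator side.
nvmf_tcp_init_cmds() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

nvmf_tcp_init_cmds
```

The two `ping -c 1` checks that follow in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside the namespace) verify this plumbing before the target is started.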
00:09:23.557 [2024-07-26 11:17:19.099526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.557 [2024-07-26 11:17:19.099533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.557 [2024-07-26 11:17:19.099538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.557 [2024-07-26 11:17:19.099543] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.557 [2024-07-26 11:17:19.099604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.557 [2024-07-26 11:17:19.099724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.557 [2024-07-26 11:17:19.099833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.557 [2024-07-26 11:17:19.099835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.124 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.124 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:24.124 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.124 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.124 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.383 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.384 
11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 [2024-07-26 11:17:19.869675] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 Malloc0 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.384 
11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 [2024-07-26 11:17:19.938394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1391877 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1391880 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.384 { 00:09:24.384 "params": { 00:09:24.384 "name": "Nvme$subsystem", 00:09:24.384 "trtype": "$TEST_TRANSPORT", 00:09:24.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.384 "adrfam": "ipv4", 00:09:24.384 "trsvcid": "$NVMF_PORT", 00:09:24.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.384 "hdgst": ${hdgst:-false}, 00:09:24.384 "ddgst": ${ddgst:-false} 00:09:24.384 }, 00:09:24.384 "method": "bdev_nvme_attach_controller" 00:09:24.384 } 00:09:24.384 EOF 00:09:24.384 )") 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1391883 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.384 11:17:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.384 { 00:09:24.384 "params": { 00:09:24.384 "name": "Nvme$subsystem", 00:09:24.384 "trtype": "$TEST_TRANSPORT", 00:09:24.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.384 "adrfam": "ipv4", 00:09:24.384 "trsvcid": "$NVMF_PORT", 00:09:24.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.384 "hdgst": ${hdgst:-false}, 00:09:24.384 "ddgst": ${ddgst:-false} 00:09:24.384 }, 00:09:24.384 "method": "bdev_nvme_attach_controller" 00:09:24.384 } 00:09:24.384 EOF 00:09:24.384 )") 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1391887 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.384 { 00:09:24.384 "params": { 00:09:24.384 "name": "Nvme$subsystem", 00:09:24.384 "trtype": "$TEST_TRANSPORT", 00:09:24.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.384 "adrfam": "ipv4", 
00:09:24.384 "trsvcid": "$NVMF_PORT", 00:09:24.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.384 "hdgst": ${hdgst:-false}, 00:09:24.384 "ddgst": ${ddgst:-false} 00:09:24.384 }, 00:09:24.384 "method": "bdev_nvme_attach_controller" 00:09:24.384 } 00:09:24.384 EOF 00:09:24.384 )") 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.384 { 00:09:24.384 "params": { 00:09:24.384 "name": "Nvme$subsystem", 00:09:24.384 "trtype": "$TEST_TRANSPORT", 00:09:24.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.384 "adrfam": "ipv4", 00:09:24.384 "trsvcid": "$NVMF_PORT", 00:09:24.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.384 "hdgst": ${hdgst:-false}, 00:09:24.384 "ddgst": ${ddgst:-false} 00:09:24.384 }, 00:09:24.384 "method": "bdev_nvme_attach_controller" 00:09:24.384 } 00:09:24.384 EOF 00:09:24.384 )") 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 1391877 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:24.384 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.384 "params": { 00:09:24.384 "name": "Nvme1", 00:09:24.384 "trtype": "tcp", 00:09:24.384 "traddr": "10.0.0.2", 00:09:24.384 "adrfam": "ipv4", 00:09:24.384 "trsvcid": "4420", 00:09:24.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.384 "hdgst": false, 00:09:24.384 "ddgst": false 00:09:24.384 }, 00:09:24.384 "method": "bdev_nvme_attach_controller" 00:09:24.384 }' 00:09:24.385 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
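Aside: the `config=()` / here-doc / `IFS=,` dance traced above (nvmf/common.sh@532–558) is how `gen_nvmf_target_json` builds the bdevperf JSON fed in via `/dev/fd/63`. A minimal sketch of the same pattern, with the transport values hard-coded to the resolved output shown in the log (the real helper loops over its arguments and pipes the result through `jq .`):

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: expand a here-doc template once per
# subsystem, collect the fragments in an array, join them with IFS=,.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join fragments with commas, as the log's IFS=, + printf does.
(IFS=,; printf '%s\n' "${config[*]}")
```

The `${hdgst:-false}` / `${ddgst:-false}` expansions are why the resolved output above shows `"hdgst": false` when no digest options are exported.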
00:09:24.385 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.385 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.385 "params": { 00:09:24.385 "name": "Nvme1", 00:09:24.385 "trtype": "tcp", 00:09:24.385 "traddr": "10.0.0.2", 00:09:24.385 "adrfam": "ipv4", 00:09:24.385 "trsvcid": "4420", 00:09:24.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.385 "hdgst": false, 00:09:24.385 "ddgst": false 00:09:24.385 }, 00:09:24.385 "method": "bdev_nvme_attach_controller" 00:09:24.385 }' 00:09:24.385 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.385 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.385 "params": { 00:09:24.385 "name": "Nvme1", 00:09:24.385 "trtype": "tcp", 00:09:24.385 "traddr": "10.0.0.2", 00:09:24.385 "adrfam": "ipv4", 00:09:24.385 "trsvcid": "4420", 00:09:24.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.385 "hdgst": false, 00:09:24.385 "ddgst": false 00:09:24.385 }, 00:09:24.385 "method": "bdev_nvme_attach_controller" 00:09:24.385 }' 00:09:24.385 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.385 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.385 "params": { 00:09:24.385 "name": "Nvme1", 00:09:24.385 "trtype": "tcp", 00:09:24.385 "traddr": "10.0.0.2", 00:09:24.385 "adrfam": "ipv4", 00:09:24.385 "trsvcid": "4420", 00:09:24.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.385 "hdgst": false, 00:09:24.385 "ddgst": false 00:09:24.385 }, 00:09:24.385 "method": "bdev_nvme_attach_controller" 00:09:24.385 }' 00:09:24.385 [2024-07-26 11:17:19.988254] Starting SPDK v24.09-pre git sha1 
487ff9e1a / DPDK 24.03.0 initialization... 00:09:24.385 [2024-07-26 11:17:19.988305] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:24.385 [2024-07-26 11:17:19.990070] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:09:24.385 [2024-07-26 11:17:19.990113] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:24.385 [2024-07-26 11:17:19.990291] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:09:24.385 [2024-07-26 11:17:19.990333] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:24.385 [2024-07-26 11:17:19.990618] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:09:24.385 [2024-07-26 11:17:19.990663] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:24.385 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.643 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.643 [2024-07-26 11:17:20.159600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.643 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.643 [2024-07-26 11:17:20.236289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:24.643 [2024-07-26 11:17:20.257876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.902 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.902 [2024-07-26 11:17:20.331591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:24.902 [2024-07-26 11:17:20.351246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.902 [2024-07-26 11:17:20.428886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:24.902 [2024-07-26 11:17:20.451218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.902 [2024-07-26 11:17:20.539906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:25.160 Running I/O for 1 seconds... 00:09:25.160 Running I/O for 1 seconds... 00:09:25.160 Running I/O for 1 seconds... 00:09:25.418 Running I/O for 1 seconds... 
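Aside: the four "Running I/O for 1 seconds..." lines come from four bdevperf instances (write/read/flush/unmap) launched in the background, whose PIDs are recorded as WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and then reaped with `wait` (target/bdev_io_wait.sh@28–40 above). A generic sketch of that pattern, where `run_workload` is a hypothetical stand-in for a bdevperf invocation:

```shell
#!/usr/bin/env bash
# Background-and-wait pattern from bdev_io_wait.sh: start each workload
# asynchronously, capture $! immediately, then block on all PIDs.
run_workload() {  # placeholder for: bdevperf -q 128 -o 4096 -w "$1" -t 1
  sleep 0.1
}

run_workload write &
WRITE_PID=$!
run_workload read &
READ_PID=$!
run_workload flush &
FLUSH_PID=$!
run_workload unmap &
UNMAP_PID=$!

wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
echo "all workloads finished"
```

Capturing `$!` right after each `&` is what lets the later `wait 1391877` / `wait 1391880` / ... lines in the log target specific workloads.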
00:09:26.351 00:09:26.351 Latency(us) 00:09:26.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.351 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:26.351 Nvme1n1 : 1.01 8572.84 33.49 0.00 0.00 14825.83 6428.77 23343.30 00:09:26.351 =================================================================================================================== 00:09:26.351 Total : 8572.84 33.49 0.00 0.00 14825.83 6428.77 23343.30 00:09:26.351 00:09:26.351 Latency(us) 00:09:26.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.351 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:26.351 Nvme1n1 : 1.00 251961.01 984.22 0.00 0.00 505.31 208.70 651.46 00:09:26.351 =================================================================================================================== 00:09:26.351 Total : 251961.01 984.22 0.00 0.00 505.31 208.70 651.46 00:09:26.351 00:09:26.351 Latency(us) 00:09:26.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.351 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:26.351 Nvme1n1 : 1.00 8246.12 32.21 0.00 0.00 15486.17 4431.48 29210.33 00:09:26.351 =================================================================================================================== 00:09:26.351 Total : 8246.12 32.21 0.00 0.00 15486.17 4431.48 29210.33 00:09:26.351 00:09:26.351 Latency(us) 00:09:26.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.351 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:26.351 Nvme1n1 : 1.01 11531.02 45.04 0.00 0.00 11063.72 5742.20 23468.13 00:09:26.351 =================================================================================================================== 00:09:26.351 Total : 11531.02 45.04 0.00 0.00 11063.72 5742.20 23468.13 00:09:26.351 11:17:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1391880 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1391883 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1391887 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.610 rmmod nvme_tcp 00:09:26.610 rmmod nvme_fabrics 00:09:26.610 rmmod nvme_keyring 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@124 -- # set -e 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1391684 ']' 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1391684 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1391684 ']' 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1391684 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391684 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.610 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.611 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391684' 00:09:26.611 killing process with pid 1391684 00:09:26.611 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1391684 00:09:26.611 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1391684 00:09:26.869 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.869 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:26.869 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:26.869 
11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.869 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.869 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.869 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.869 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.773 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.773 00:09:28.773 real 0m11.476s 00:09:28.773 user 0m20.431s 00:09:28.773 sys 0m5.988s 00:09:28.773 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.773 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.773 ************************************ 00:09:28.773 END TEST nvmf_bdev_io_wait 00:09:28.773 ************************************ 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.031 ************************************ 00:09:29.031 START TEST nvmf_queue_depth 00:09:29.031 ************************************ 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.031 * Looking for test storage... 00:09:29.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:29.031 11:17:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.031 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:29.032 
11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.032 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.600 11:17:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.600 11:17:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:35.600 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.600 
11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:35.600 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:35.600 Found net devices under 0000:86:00.0: cvl_0_0 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:35.600 Found net devices under 0000:86:00.1: cvl_0_1 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.600 
11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.600 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:09:35.601 00:09:35.601 --- 10.0.0.2 ping statistics --- 00:09:35.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.601 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:09:35.601 00:09:35.601 --- 10.0.0.1 ping statistics --- 00:09:35.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.601 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1395725 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
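The `ip netns` / `ip addr` / `ping` sequence above isolates the target NIC in a private namespace (`cvl_0_0` at 10.0.0.2 inside `cvl_0_0_ns_spdk`) while the initiator NIC stays in the root namespace at 10.0.0.1. A sketch of that topology using a veth pair instead of the physical `cvl_0_*` interfaces, so it runs on any Linux box with root; all names here are illustrative, not from the SPDK scripts:

```shell
#!/usr/bin/env bash
set -e
# Namespace operations need root; bail out quietly elsewhere.
[ "$(id -u)" -eq 0 ] || { echo "needs root, skipping"; exit 0; }

NS=demo_tgt_ns
ip netns add "$NS"
# veth pair stands in for the two physical ports seen in the log
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"                      # target side into the namespace
ip addr add 10.0.0.1/24 dev veth_init                 # initiator stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up
# connectivity check in both directions, as the trace does
ping -c 1 10.0.0.2 >/dev/null
ip netns exec "$NS" ping -c 1 10.0.0.1 >/dev/null
echo "namespace topology up"
```

With this in place, the target app is simply prefixed with `ip netns exec $NS ...`, which is exactly how the log launches `nvmf_tgt` below.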
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1395725 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1395725 ']' 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.601 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.601 [2024-07-26 11:17:30.474646] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:09:35.601 [2024-07-26 11:17:30.474691] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.601 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.601 [2024-07-26 11:17:30.545786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.601 [2024-07-26 11:17:30.623326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.601 [2024-07-26 11:17:30.623359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:35.601 [2024-07-26 11:17:30.623366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.601 [2024-07-26 11:17:30.623373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.601 [2024-07-26 11:17:30.623378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.601 [2024-07-26 11:17:30.623395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.861 [2024-07-26 11:17:31.310031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.861 Malloc0 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.861 [2024-07-26 11:17:31.380877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.861 11:17:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1395972 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1395972 /var/tmp/bdevperf.sock 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1395972 ']' 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.861 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.862 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.862 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.862 [2024-07-26 11:17:31.428155] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:09:35.862 [2024-07-26 11:17:31.428193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395972 ]
00:09:35.862 EAL: No free 2048 kB hugepages reported on node 1
00:09:35.862 [2024-07-26 11:17:31.495406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.121 [2024-07-26 11:17:31.574324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.688 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:36.688 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:09:36.688 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:09:36.688 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.688 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:36.688 NVMe0n1
00:09:36.688 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.688 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:36.947 Running I/O for 10 seconds...
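The trace above shows the wiring of the queue-depth test: an NVMe-oF TCP subsystem backed by a Malloc bdev is exposed on 10.0.0.2:4420, and bdevperf attaches to it and runs 4 KiB verify I/O at queue depth 1024 for 10 seconds. A minimal sketch for reproducing the same RPC sequence by hand follows; it assumes a running nvmf_tgt on the default RPC socket, `SPDK_DIR` pointing at an SPDK checkout, and reconstructs the Malloc0 creation step (which happens before this excerpt) from the 64 MiB / 512 B sizes used elsewhere in the suite — all other commands and values are taken verbatim from this run.

```shell
# Assumption: nvmf_tgt is already running and listening on /var/tmp/spdk.sock.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py

# Backing bdev (64 MiB, 512 B blocks) -- reconstructed, not shown in this excerpt.
$rpc bdev_malloc_create -b Malloc0 64 512

# Subsystem, namespace, and TCP listener, as traced by queue_depth.sh@25-27.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf in wait-for-RPC mode (-z): queue depth 1024, 4096 B I/O, verify, 10 s.
$SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# Attach the NVMe-oF controller inside bdevperf, then kick off the run.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```

This requires a live SPDK target and the test NICs configured as in the run above, so it is a runbook fragment rather than a standalone script.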
00:09:46.922
00:09:46.922 Latency(us)
00:09:46.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.922 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:46.922 Verification LBA range: start 0x0 length 0x4000
00:09:46.922 NVMe0n1 : 10.05 12611.16 49.26 0.00 0.00 80947.10 13107.20 51929.48
00:09:46.922 ===================================================================================================================
00:09:46.922 Total : 12611.16 49.26 0.00 0.00 80947.10 13107.20 51929.48
00:09:46.922 0
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1395972
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1395972 ']'
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1395972
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1395972
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1395972'
killing process with pid 1395972
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1395972
Received shutdown signal, test time was about 10.000000 seconds
00:09:46.922
00:09:46.922 Latency(us)
00:09:46.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.922 ===================================================================================================================
00:09:46.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:46.922 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1395972
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1395725 ']'
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1395725
00:09:47.180 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1395725 ']'
00:09:47.181 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1395725 00:09:47.181 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:47.181 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.181 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1395725 00:09:47.440 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:47.440 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:47.440 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1395725' 00:09:47.440 killing process with pid 1395725 00:09:47.440 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1395725 00:09:47.440 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1395725 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:09:47.440 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.973 00:09:49.973 real 0m20.624s 00:09:49.973 user 0m24.928s 00:09:49.973 sys 0m5.859s 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.973 ************************************ 00:09:49.973 END TEST nvmf_queue_depth 00:09:49.973 ************************************ 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.973 ************************************ 00:09:49.973 START TEST nvmf_target_multipath 00:09:49.973 ************************************ 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.973 * Looking for test storage... 
00:09:49.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.973 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 
00:09:55.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:55.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:55.247 Found net devices under 0000:86:00.0: cvl_0_0 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:55.247 Found net devices under 0000:86:00.1: cvl_0_1 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.247 11:17:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.247 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.507 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.507 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.507 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.507 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:55.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms
00:09:55.507
00:09:55.507 --- 10.0.0.2 ping statistics ---
00:09:55.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:55.507 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:55.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:55.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms
00:09:55.507
00:09:55.507 --- 10.0.0.1 ping statistics ---
00:09:55.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:55.507 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:55.507 11:17:51
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:55.507 only one NIC for nvmf test 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.507 rmmod nvme_tcp 00:09:55.507 rmmod nvme_fabrics 00:09:55.507 rmmod nvme_keyring 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:55.507 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.765 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.765 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.765 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.765 11:17:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.765 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.765 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.765 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:57.672 11:17:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.672 00:09:57.672 real 0m8.077s 00:09:57.672 user 0m1.713s 00:09:57.672 sys 0m4.345s 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.672 ************************************ 00:09:57.672 END TEST nvmf_target_multipath 00:09:57.672 ************************************ 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.672 11:17:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.672 
11:17:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.931 ************************************ 00:09:57.931 START TEST nvmf_zcopy 00:09:57.931 ************************************ 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.931 * Looking for test storage... 00:09:57.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.931 11:17:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.931 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.499 11:17:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:04.499 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:04.499 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:04.499 11:17:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:04.499 Found net devices under 0000:86:00.0: cvl_0_0 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.499 
11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:04.499 Found net devices under 0000:86:00.1: cvl_0_1 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.499 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:04.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:10:04.499 00:10:04.499 --- 10.0.0.2 ping statistics --- 00:10:04.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.499 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:10:04.499 00:10:04.499 --- 10.0.0.1 ping statistics --- 00:10:04.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.499 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1404730 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1404730 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1404730 ']' 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.499 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.499 [2024-07-26 11:17:59.312223] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:10:04.499 [2024-07-26 11:17:59.312266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.499 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.499 [2024-07-26 11:17:59.381853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.499 [2024-07-26 11:17:59.453206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.499 [2024-07-26 11:17:59.453245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.499 [2024-07-26 11:17:59.453252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.499 [2024-07-26 11:17:59.453258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.499 [2024-07-26 11:17:59.453263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:04.499 [2024-07-26 11:17:59.453297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.499 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.499 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:04.499 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.499 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.499 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 [2024-07-26 11:18:00.167938] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 [2024-07-26 11:18:00.188096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 malloc0 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 11:18:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:04.801 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:04.801 { 00:10:04.801 "params": { 00:10:04.801 "name": "Nvme$subsystem", 00:10:04.801 "trtype": "$TEST_TRANSPORT", 00:10:04.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.801 "adrfam": "ipv4", 00:10:04.801 "trsvcid": "$NVMF_PORT", 00:10:04.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.801 "hdgst": ${hdgst:-false}, 00:10:04.802 "ddgst": ${ddgst:-false} 00:10:04.802 }, 00:10:04.802 "method": "bdev_nvme_attach_controller" 00:10:04.802 } 00:10:04.802 EOF 00:10:04.802 )") 00:10:04.802 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:04.802 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:04.802 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:04.802 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:04.802 "params": { 00:10:04.802 "name": "Nvme1", 00:10:04.802 "trtype": "tcp", 00:10:04.802 "traddr": "10.0.0.2", 00:10:04.802 "adrfam": "ipv4", 00:10:04.802 "trsvcid": "4420", 00:10:04.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.802 "hdgst": false, 00:10:04.802 "ddgst": false 00:10:04.802 }, 00:10:04.802 "method": "bdev_nvme_attach_controller" 00:10:04.802 }' 00:10:04.802 [2024-07-26 11:18:00.284229] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:10:04.802 [2024-07-26 11:18:00.284279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404895 ] 00:10:04.802 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.802 [2024-07-26 11:18:00.350287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.802 [2024-07-26 11:18:00.422420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.368 Running I/O for 10 seconds... 
00:10:15.347
00:10:15.347 Latency(us)
00:10:15.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:15.347 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:15.347 Verification LBA range: start 0x0 length 0x1000
00:10:15.347 Nvme1n1 : 10.01 8977.58 70.14 0.00 0.00 14216.84 2106.51 24217.11
00:10:15.347 ===================================================================================================================
00:10:15.347 Total : 8977.58 70.14 0.00 0.00 14216.84 2106.51 24217.11
00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1407225 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:15.347 { 00:10:15.347 "params": { 00:10:15.347 "name": "Nvme$subsystem", 00:10:15.347 "trtype": "$TEST_TRANSPORT", 00:10:15.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.347 "adrfam": "ipv4", 00:10:15.347 "trsvcid": "$NVMF_PORT", 00:10:15.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.347 "hdgst":
${hdgst:-false}, 00:10:15.347 "ddgst": ${ddgst:-false} 00:10:15.347 }, 00:10:15.347 "method": "bdev_nvme_attach_controller" 00:10:15.347 } 00:10:15.347 EOF 00:10:15.347 )") 00:10:15.347 [2024-07-26 11:18:10.963706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.347 [2024-07-26 11:18:10.963743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:15.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:15.347 "params": { 00:10:15.347 "name": "Nvme1", 00:10:15.347 "trtype": "tcp", 00:10:15.347 "traddr": "10.0.0.2", 00:10:15.347 "adrfam": "ipv4", 00:10:15.347 "trsvcid": "4420", 00:10:15.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.347 "hdgst": false, 00:10:15.347 "ddgst": false 00:10:15.347 }, 00:10:15.347 "method": "bdev_nvme_attach_controller" 00:10:15.347 }' 00:10:15.347 [2024-07-26 11:18:10.975690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.347 [2024-07-26 11:18:10.975704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.347 [2024-07-26 11:18:10.983707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.347 [2024-07-26 11:18:10.983716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.347 [2024-07-26 11:18:10.995740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.347 [2024-07-26 11:18:10.995749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.347 [2024-07-26 11:18:11.002662] 
Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:10:15.347 [2024-07-26 11:18:11.002699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407225 ] 00:10:15.606 [2024-07-26 11:18:11.007779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.606 [2024-07-26 11:18:11.007794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.606 [2024-07-26 11:18:11.019807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.606 [2024-07-26 11:18:11.019816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.606 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.606 [2024-07-26 11:18:11.031838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.606 [2024-07-26 11:18:11.031847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.606 [2024-07-26 11:18:11.043868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.606 [2024-07-26 11:18:11.043877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.606 [2024-07-26 11:18:11.055899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.606 [2024-07-26 11:18:11.055912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.606 [2024-07-26 11:18:11.063918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.606 [2024-07-26 11:18:11.063927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.606 [2024-07-26 11:18:11.067238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 
1 00:10:15.606 [2024-07-26 11:18:11.071941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.606 [2024-07-26 11:18:11.071951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.083973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.083984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.091991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.092000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.100013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.100023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.108040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.108060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.116057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.116068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.128092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.128101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.136112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.136120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.141670] reactor.c: 
941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.607 [2024-07-26 11:18:11.144134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.144144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.152160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.152173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.160185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.160200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.172216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.172230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.180234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.180245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.188253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.188265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.196276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.196287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.204298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.204308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:15.607 [2024-07-26 11:18:11.216329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.216343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.224351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.224360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.232383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.232400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.240402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.240415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.248420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.248431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.607 [2024-07-26 11:18:11.260452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.607 [2024-07-26 11:18:11.260464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.268471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.268483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.276490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.276500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.284512] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.284520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.292537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.292547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.304575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.304588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.312594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.312606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.320614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.320622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.328649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.328665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.336663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.336672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 Running I/O for 5 seconds... 
00:10:15.866 [2024-07-26 11:18:11.352998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.353016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.360399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.360417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.369142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.369160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.378229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.378248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.387492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.387510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.396625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.396649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.405774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.405792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.414899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.414918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.424039] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.424057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.433173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.433190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.442256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.442273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.451400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.451418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.460335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.460352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.866 [2024-07-26 11:18:11.469947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.866 [2024-07-26 11:18:11.469966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.867 [2024-07-26 11:18:11.479207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.867 [2024-07-26 11:18:11.479226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.867 [2024-07-26 11:18:11.487866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.867 [2024-07-26 11:18:11.487884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.867 [2024-07-26 11:18:11.496947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:15.867 [2024-07-26 11:18:11.496964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.867 [2024-07-26 11:18:11.506130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.867 [2024-07-26 11:18:11.506148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.867 [2024-07-26 11:18:11.515732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.867 [2024-07-26 11:18:11.515749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.867 [2024-07-26 11:18:11.524430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.867 [2024-07-26 11:18:11.524448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.538572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.538590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.547345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.547363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.555787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.555804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.564995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.565013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.574146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 
[2024-07-26 11:18:11.574163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.588146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.588165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.597341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.597359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.606738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.606756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.615916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.615934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.625051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.625070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.639792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.639812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.647597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.647615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.656089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.656107] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.665824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.665842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.674503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.674521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.688333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.688351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.697201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.697219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.705712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.705731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.715294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.715312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.724573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.724591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.738970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.738989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:16.126 [2024-07-26 11:18:11.747713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.747731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.757393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.757410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.766984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.767002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.126 [2024-07-26 11:18:11.775825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.126 [2024-07-26 11:18:11.775843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.385 [2024-07-26 11:18:11.790135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.385 [2024-07-26 11:18:11.790154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.385 [2024-07-26 11:18:11.798816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.385 [2024-07-26 11:18:11.798834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.385 [2024-07-26 11:18:11.807332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.385 [2024-07-26 11:18:11.807350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.385 [2024-07-26 11:18:11.816161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.386 [2024-07-26 11:18:11.816179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.386 [2024-07-26 11:18:11.825451] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:16.386 [2024-07-26 11:18:11.825469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 10 ms intervals from [2024-07-26 11:18:11.839598] through [2024-07-26 11:18:13.381824] ...]
00:10:17.943 [2024-07-26 11:18:13.390388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.943 [2024-07-26 11:18:13.390406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.943 [2024-07-26 11:18:13.399389]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.399408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.408280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.408298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.422494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.422511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.430036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.430054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.439460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.439478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.447972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.447990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.456431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.456448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.465495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.465512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.474546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.474563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.483574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.483591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.492726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.492743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.501942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.501961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.511108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.511127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.520056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.520074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.528755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.528772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.538220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.538238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.547441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 
[2024-07-26 11:18:13.547459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.562084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.562107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.569347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.569365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.577870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.577888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.586320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.586337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.943 [2024-07-26 11:18:13.595390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.943 [2024-07-26 11:18:13.595408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.604536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.604553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.613648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.613665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.623058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.623075] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.632077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.632094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.641060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.641078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.650550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.650567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.659198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.659216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.668904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.668922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.677527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.677544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.687044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.687062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.701103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.701121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.202 [2024-07-26 11:18:13.709953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.709970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.719217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.719235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.728306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.728323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.736702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.736722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.746486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.746503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.755309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.755326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.764424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.764441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.773015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.773032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.782340] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.782357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.791888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.791905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.800378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.800395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.809335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.809353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.818357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.818374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.827852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.827870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.841961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.841979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.851073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.851091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.202 [2024-07-26 11:18:13.860469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.202 [2024-07-26 11:18:13.860488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.868899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.868918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.877853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.877870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.886925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.886943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.896166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.896183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.905232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.905249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.914207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.914228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.922925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.922942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.932017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 
[2024-07-26 11:18:13.932035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.941101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.941119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.950056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.950073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.958613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.958638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.967434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.967451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.981177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.981195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.990006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.990024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:13.998868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:13.998885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.007994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.008011] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.016516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.016533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.025763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.025780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.034970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.034987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.044583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.044601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.053307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.053324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.062109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.062126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.076318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.076336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.085084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.085101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.462 [2024-07-26 11:18:14.093749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.093767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.102992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.103009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.462 [2024-07-26 11:18:14.111965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.462 [2024-07-26 11:18:14.111983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.126181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.126200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.134774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.134792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.144069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.144087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.153050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.153067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.162366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.162384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.176363] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.176381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.185120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.185138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.194454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.194472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.202831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.202849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.211841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.211859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.225942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.225960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.234854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.234871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.244141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.244158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.252621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.252644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.262319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.262337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.276819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.276837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.285578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.285595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.295366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.295384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.304133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.304150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.313353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.313370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.322875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.322892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.332236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 
[2024-07-26 11:18:14.332253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.340908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.340925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.349520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.349537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.358512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.358529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.368105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.368123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.722 [2024-07-26 11:18:14.377219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.722 [2024-07-26 11:18:14.377237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.385971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.385990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.395600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.395620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.404227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.404247] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.413730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.413750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.422392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.422411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.430984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.431002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.439588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.439606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.448517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.448535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.457640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.457658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.471661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.471679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.478867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.478885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.981 [2024-07-26 11:18:14.489130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.489148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.497678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.497696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.511702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.511720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.520570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.520589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.529637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.529656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.538693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.538710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.547698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.547716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.556847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.556864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.565974] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.981 [2024-07-26 11:18:14.565991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.981 [2024-07-26 11:18:14.575047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.575064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.982 [2024-07-26 11:18:14.584666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.584686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.982 [2024-07-26 11:18:14.593336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.593353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.982 [2024-07-26 11:18:14.602501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.602518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.982 [2024-07-26 11:18:14.612085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.612102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.982 [2024-07-26 11:18:14.620757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.620775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.982 [2024-07-26 11:18:14.629620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.629644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.982 [2024-07-26 11:18:14.639179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.982 [2024-07-26 11:18:14.639197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.652990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.653009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.661714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.661732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.675586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.675605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.684419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.684437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.693198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.693215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.707122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.707139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.716017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.716035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.725143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 
[2024-07-26 11:18:14.725161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.734773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.734791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.743518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.743536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.752638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.752655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.761233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.761250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.770283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.770300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.778746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.778764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.787956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.787973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.797005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.797022] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.806113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.806130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.815028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.815049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.823635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.823653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.832702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.832719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.847354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.847372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.855082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.855100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.862775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.241 [2024-07-26 11:18:14.862792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.241 [2024-07-26 11:18:14.871941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.242 [2024-07-26 11:18:14.871958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:19.242 [2024-07-26 11:18:14.881302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.242 [2024-07-26 11:18:14.881320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.242 [2024-07-26 11:18:14.890491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.242 [2024-07-26 11:18:14.890509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.242 [2024-07-26 11:18:14.899614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.242 [2024-07-26 11:18:14.899638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.908809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.908827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.917756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.917773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.926785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.926803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.940715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.940732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.954236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.954253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.962884] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.962901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.971563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.971580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.980062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.980079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.989128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.989145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:14.998344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:14.998365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.007449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:15.007466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.017036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:15.017054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.025491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:15.025508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.034623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:15.034646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.043987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:15.044004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.052708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:15.052726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.061684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-07-26 11:18:15.061701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-07-26 11:18:15.070822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.070839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.080183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.080201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.089711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.089729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.098736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.098753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.107972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 
[2024-07-26 11:18:15.107989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.117081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.117098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.131014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.131032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.139754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.139771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.148869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.148886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.502 [2024-07-26 11:18:15.157468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.502 [2024-07-26 11:18:15.157486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.166495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.166513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.180549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.180571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.189570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.189587] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.198753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.198771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.207810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.207827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.216759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.216777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.230771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.230790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.239569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.239587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.248681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.248698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.257173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.257190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.266390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.266407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:19.761 [2024-07-26 11:18:15.288715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.288734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.297256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.297274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.306260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.306278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.315961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.315978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.329890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.329909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.338544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.338563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.348185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.348202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.357377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.357394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.366658] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.366675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.380968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.380989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.389877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.389894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.399614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.399640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.408835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.408852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.761 [2024-07-26 11:18:15.417707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.761 [2024-07-26 11:18:15.417724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.431914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.431931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.440706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.440724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.449893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.449910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.459768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.459786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.468289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.468307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.482202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.482220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.490886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.490903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.500047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.500065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.509286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.509303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.517739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.517756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.526805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 
[2024-07-26 11:18:15.526823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.535695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.535714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.544970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.544988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.554464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.554482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.563364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.563382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.572970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.572987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.581468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.581486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.590383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.590401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.598845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.598862] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.607330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.607347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.621223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.621241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.629733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.629750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.638428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.638445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.647862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.647880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.656975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.656992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.666469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.666486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.021 [2024-07-26 11:18:15.675393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.021 [2024-07-26 11:18:15.675413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:20.280 [2024-07-26 11:18:15.683940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.683957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.280 [2024-07-26 11:18:15.693178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.693196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.280 [2024-07-26 11:18:15.701869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.701886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.280 [2024-07-26 11:18:15.716149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.716166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.280 [2024-07-26 11:18:15.724876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.724894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.280 [2024-07-26 11:18:15.733910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.733928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.280 [2024-07-26 11:18:15.742456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.742474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.280 [2024-07-26 11:18:15.751471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.280 [2024-07-26 11:18:15.751488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.765843] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.765862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.774448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.774466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.783579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.783597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.793200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.793219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.801649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.801668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.811242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.811260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.819681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.819698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.828339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.828358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.837308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.837326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.846515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.846533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.860940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.860958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.874045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.874062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.882624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.882647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.892106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.892123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.901345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.901363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.915510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.915528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.924433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 
[2024-07-26 11:18:15.924450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.281 [2024-07-26 11:18:15.933042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.281 [2024-07-26 11:18:15.933060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:15.942249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:15.942268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:15.951227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:15.951245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:15.965255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:15.965273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:15.974220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:15.974238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:15.982901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:15.982920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:15.991818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:15.991835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.001019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.001037] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.010251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.010269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.020051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.020070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.028383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.028401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.037479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.037496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.046622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.046645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.061125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.061143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.069913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.069931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.079376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.079393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:20.540 [2024-07-26 11:18:16.088682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.088700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.097780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.097798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.106830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.106848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.116044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.116062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.125287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.540 [2024-07-26 11:18:16.125304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.540 [2024-07-26 11:18:16.134776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.541 [2024-07-26 11:18:16.134794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.541 [2024-07-26 11:18:16.143820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.541 [2024-07-26 11:18:16.143837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.541 [2024-07-26 11:18:16.152746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.541 [2024-07-26 11:18:16.152763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.541 [2024-07-26 11:18:16.161784] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.541 [2024-07-26 11:18:16.161802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.541 [2024-07-26 11:18:16.170895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.541 [2024-07-26 11:18:16.170912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.541 [2024-07-26 11:18:16.180066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.541 [2024-07-26 11:18:16.180084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.541 [2024-07-26 11:18:16.188730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.541 [2024-07-26 11:18:16.188748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.203252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.203269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.211986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.212004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.221428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.221447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.230509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.230527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.239812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.239829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.253861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.253878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.262740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.262757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.271898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.271915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.280317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.280334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.289432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.289453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.298749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.298766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.307884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.307902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.316958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 
[2024-07-26 11:18:16.316976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.325498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.325516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.334806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.334823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.343954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.343972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.352765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.352783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 [2024-07-26 11:18:16.358777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.800 [2024-07-26 11:18:16.358793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.800 00:10:20.800 Latency(us) 00:10:20.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.800 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:20.800 Nvme1n1 : 5.01 17155.07 134.02 0.00 0.00 7454.34 3214.38 19723.22 00:10:20.800 =================================================================================================================== 00:10:20.801 Total : 17155.07 134.02 0.00 0.00 7454.34 3214.38 19723.22 00:10:20.801 [2024-07-26 11:18:16.366791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.366804] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.374810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.374833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.386856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.386872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.398886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.398906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.410915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.410930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.422944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.422959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.430963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.430975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.442995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.443017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.801 [2024-07-26 11:18:16.451014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.451028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:20.801 [2024-07-26 11:18:16.459040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.801 [2024-07-26 11:18:16.459056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.467059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.467069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.479092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.479101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.491128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.491141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.499145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.499155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.507165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.507174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.515186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.515198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.523208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.523219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 [2024-07-26 11:18:16.535241] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.060 [2024-07-26 11:18:16.535250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1407225) - No such process 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1407225 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.060 delay0 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.060 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:21.060 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.060 [2024-07-26 11:18:16.716768] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:27.625 [2024-07-26 11:18:23.247087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b225d0 is same with the state(5) to be set 00:10:27.625 Initializing NVMe Controllers 00:10:27.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:27.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:27.625 Initialization complete. Launching workers. 00:10:27.625 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2709 00:10:27.625 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2972, failed to submit 57 00:10:27.625 success 2813, unsuccessful 159, failed 0 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.625 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.625 rmmod 
nvme_tcp 00:10:27.625 rmmod nvme_fabrics 00:10:27.884 rmmod nvme_keyring 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1404730 ']' 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1404730 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1404730 ']' 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1404730 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1404730 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1404730' 00:10:27.884 killing process with pid 1404730 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1404730 00:10:27.884 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1404730 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- 
# [[ tcp == \t\c\p ]] 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.143 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.045 00:10:30.045 real 0m32.279s 00:10:30.045 user 0m44.021s 00:10:30.045 sys 0m10.717s 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.045 ************************************ 00:10:30.045 END TEST nvmf_zcopy 00:10:30.045 ************************************ 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.045 ************************************ 00:10:30.045 START TEST nvmf_nmic 00:10:30.045 ************************************ 00:10:30.045 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.304 * Looking for test storage... 00:10:30.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.304 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.305 11:18:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.305 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.876 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:36.877 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:36.877 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:36.877 Found net devices under 0000:86:00.0: cvl_0_0 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:36.877 Found net devices under 0000:86:00.1: cvl_0_1 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.877 11:18:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:36.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:10:36.877 00:10:36.877 --- 10.0.0.2 ping statistics --- 00:10:36.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.877 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:10:36.877 00:10:36.877 --- 10.0.0.1 ping statistics --- 00:10:36.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.877 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.877 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1412813 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1412813 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1412813 ']' 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.878 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.878 [2024-07-26 11:18:31.662657] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:10:36.878 [2024-07-26 11:18:31.662696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.878 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.878 [2024-07-26 11:18:31.729595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.878 [2024-07-26 11:18:31.809330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.878 [2024-07-26 11:18:31.809363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.878 [2024-07-26 11:18:31.809369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.878 [2024-07-26 11:18:31.809375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.878 [2024-07-26 11:18:31.809380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:36.878 [2024-07-26 11:18:31.809455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.878 [2024-07-26 11:18:31.809561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.878 [2024-07-26 11:18:31.809659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.878 [2024-07-26 11:18:31.809658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.878 [2024-07-26 11:18:32.496777] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.878 Malloc0 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.878 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.138 [2024-07-26 11:18:32.540493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:37.138 test case1: single bdev can't be used in multiple subsystems 
00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.138 [2024-07-26 11:18:32.564392] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:37.138 [2024-07-26 11:18:32.564410] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:37.138 [2024-07-26 11:18:32.564418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.138 request: 00:10:37.138 { 00:10:37.138 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:37.138 "namespace": { 00:10:37.138 
"bdev_name": "Malloc0", 00:10:37.138 "no_auto_visible": false 00:10:37.138 }, 00:10:37.138 "method": "nvmf_subsystem_add_ns", 00:10:37.138 "req_id": 1 00:10:37.138 } 00:10:37.138 Got JSON-RPC error response 00:10:37.138 response: 00:10:37.138 { 00:10:37.138 "code": -32602, 00:10:37.138 "message": "Invalid parameters" 00:10:37.138 } 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:37.138 Adding namespace failed - expected result. 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:37.138 test case2: host connect to nvmf target in multiple paths 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.138 [2024-07-26 11:18:32.572495] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.138 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.073 11:18:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:39.445 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.445 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.445 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.445 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:39.445 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:41.374 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:41.374 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:41.374 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.374 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:41.374 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.374 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:41.374 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:41.374 [global] 00:10:41.374 thread=1 00:10:41.374 invalidate=1 00:10:41.374 rw=write 00:10:41.374 time_based=1 00:10:41.374 runtime=1 00:10:41.374 ioengine=libaio 00:10:41.374 direct=1 00:10:41.374 bs=4096 00:10:41.374 iodepth=1 00:10:41.374 
norandommap=0 00:10:41.374 numjobs=1 00:10:41.374 00:10:41.374 verify_dump=1 00:10:41.374 verify_backlog=512 00:10:41.374 verify_state_save=0 00:10:41.374 do_verify=1 00:10:41.374 verify=crc32c-intel 00:10:41.374 [job0] 00:10:41.374 filename=/dev/nvme0n1 00:10:41.374 Could not set queue depth (nvme0n1) 00:10:41.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.633 fio-3.35 00:10:41.633 Starting 1 thread 00:10:43.009 00:10:43.009 job0: (groupid=0, jobs=1): err= 0: pid=1413901: Fri Jul 26 11:18:38 2024 00:10:43.009 read: IOPS=2491, BW=9966KiB/s (10.2MB/s)(9976KiB/1001msec) 00:10:43.009 slat (nsec): min=6480, max=25632, avg=7272.55, stdev=738.53 00:10:43.009 clat (usec): min=164, max=328, avg=225.37, stdev=19.98 00:10:43.009 lat (usec): min=170, max=335, avg=232.65, stdev=20.01 00:10:43.009 clat percentiles (usec): 00:10:43.009 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 206], 00:10:43.009 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:10:43.009 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 258], 95.00th=[ 265], 00:10:43.009 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 293], 00:10:43.009 | 99.99th=[ 330] 00:10:43.009 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:43.009 slat (nsec): min=8367, max=45286, avg=10134.53, stdev=1235.33 00:10:43.009 clat (usec): min=109, max=352, avg=149.28, stdev=15.32 00:10:43.009 lat (usec): min=118, max=397, avg=159.42, stdev=15.53 00:10:43.009 clat percentiles (usec): 00:10:43.009 | 1.00th=[ 119], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 143], 00:10:43.009 | 30.00th=[ 145], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:10:43.009 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 186], 00:10:43.009 | 99.00th=[ 200], 99.50th=[ 215], 99.90th=[ 273], 99.95th=[ 277], 00:10:43.009 | 99.99th=[ 355] 00:10:43.009 bw ( KiB/s): min=12232, max=12232, per=100.00%, avg=12232.00, stdev= 
0.00, samples=1 00:10:43.009 iops : min= 3058, max= 3058, avg=3058.00, stdev= 0.00, samples=1 00:10:43.009 lat (usec) : 250=93.17%, 500=6.83% 00:10:43.009 cpu : usr=3.20%, sys=3.90%, ctx=5054, majf=0, minf=2 00:10:43.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.009 issued rwts: total=2494,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.009 00:10:43.009 Run status group 0 (all jobs): 00:10:43.009 READ: bw=9966KiB/s (10.2MB/s), 9966KiB/s-9966KiB/s (10.2MB/s-10.2MB/s), io=9976KiB (10.2MB), run=1001-1001msec 00:10:43.009 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:43.009 00:10:43.009 Disk stats (read/write): 00:10:43.009 nvme0n1: ios=2102/2560, merge=0/0, ticks=479/375, in_queue=854, util=91.18% 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 
00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.009 rmmod nvme_tcp 00:10:43.009 rmmod nvme_fabrics 00:10:43.009 rmmod nvme_keyring 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1412813 ']' 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1412813 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1412813 ']' 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1412813 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.009 11:18:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412813 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412813' 00:10:43.009 killing process with pid 1412813 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1412813 00:10:43.009 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1412813 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.268 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.803 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:45.803 00:10:45.803 real 0m15.156s 00:10:45.803 user 0m34.878s 00:10:45.803 sys 0m5.208s 00:10:45.803 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.803 11:18:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.803 ************************************ 00:10:45.803 END TEST nvmf_nmic 00:10:45.803 ************************************ 00:10:45.803 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.803 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.803 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.803 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.803 ************************************ 00:10:45.803 START TEST nvmf_fio_target 00:10:45.803 ************************************ 00:10:45.803 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.803 * Looking for test storage... 
00:10:45.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.803 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.804 11:18:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:45.804 11:18:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:45.804 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:51.078 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:51.078 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:51.078 Found net devices under 0000:86:00.0: cvl_0_0 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.078 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:51.079 Found net devices under 0000:86:00.1: cvl_0_1 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.079 11:18:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.079 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:10:51.338 00:10:51.338 --- 10.0.0.2 ping statistics --- 00:10:51.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.338 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:10:51.338 00:10:51.338 --- 10.0.0.1 ping statistics --- 00:10:51.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.338 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1417574 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1417574 00:10:51.338 11:18:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1417574 ']' 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.338 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.338 [2024-07-26 11:18:46.889109] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:10:51.338 [2024-07-26 11:18:46.889156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.338 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.338 [2024-07-26 11:18:46.962088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.597 [2024-07-26 11:18:47.041760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.597 [2024-07-26 11:18:47.041796] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:51.597 [2024-07-26 11:18:47.041803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.597 [2024-07-26 11:18:47.041808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.597 [2024-07-26 11:18:47.041813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.597 [2024-07-26 11:18:47.041871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.597 [2024-07-26 11:18:47.041980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.597 [2024-07-26 11:18:47.042086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.597 [2024-07-26 11:18:47.042088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.164 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.164 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:52.164 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.164 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.164 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.164 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.164 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:52.422 [2024-07-26 11:18:47.882265] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.422 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.681 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:52.681 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.681 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:52.681 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.939 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:52.939 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.198 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:53.198 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:53.456 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.456 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:53.456 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.715 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:53.715 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.973 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:53.973 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:54.231 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.231 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.231 11:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.489 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.489 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.747 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.747 [2024-07-26 11:18:50.351895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.747 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:55.005 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:55.263 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.636 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:56.636 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:56.636 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.636 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:56.636 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:56.636 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:58.538 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:58.538 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:58.538 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.538 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:58.538 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.538 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:58.538 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.538 [global] 00:10:58.538 thread=1 00:10:58.538 invalidate=1 00:10:58.538 rw=write 00:10:58.538 time_based=1 00:10:58.538 runtime=1 00:10:58.538 ioengine=libaio 00:10:58.538 direct=1 00:10:58.538 bs=4096 00:10:58.538 iodepth=1 00:10:58.538 norandommap=0 00:10:58.538 numjobs=1 00:10:58.538 00:10:58.538 verify_dump=1 00:10:58.538 verify_backlog=512 00:10:58.538 verify_state_save=0 00:10:58.538 do_verify=1 00:10:58.538 verify=crc32c-intel 00:10:58.538 [job0] 00:10:58.538 filename=/dev/nvme0n1 00:10:58.538 [job1] 00:10:58.538 filename=/dev/nvme0n2 00:10:58.538 [job2] 00:10:58.538 filename=/dev/nvme0n3 00:10:58.538 [job3] 00:10:58.538 filename=/dev/nvme0n4 00:10:58.538 Could not set queue depth (nvme0n1) 00:10:58.538 Could not set queue depth (nvme0n2) 00:10:58.538 Could not set queue depth (nvme0n3) 00:10:58.538 Could not set queue depth (nvme0n4) 00:10:58.796 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.796 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.796 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.796 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.796 fio-3.35 00:10:58.796 Starting 4 threads 00:11:00.171 00:11:00.171 job0: (groupid=0, jobs=1): err= 0: pid=1419022: Fri Jul 26 11:18:55 2024 00:11:00.171 read: IOPS=2066, BW=8268KiB/s (8466kB/s)(8276KiB/1001msec) 00:11:00.171 slat (nsec): min=8394, max=38488, avg=9484.94, stdev=1807.10 00:11:00.171 clat (usec): min=192, max=489, avg=242.69, stdev=21.08 00:11:00.171 lat (usec): min=202, max=498, avg=252.17, stdev=21.19 00:11:00.171 clat percentiles (usec): 00:11:00.171 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 
00:11:00.171 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:11:00.171 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:11:00.171 | 99.00th=[ 293], 99.50th=[ 400], 99.90th=[ 474], 99.95th=[ 478], 00:11:00.171 | 99.99th=[ 490] 00:11:00.171 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:00.171 slat (nsec): min=9829, max=45353, avg=13146.80, stdev=2228.90 00:11:00.171 clat (usec): min=119, max=1300, avg=167.47, stdev=34.06 00:11:00.171 lat (usec): min=132, max=1315, avg=180.62, stdev=34.39 00:11:00.171 clat percentiles (usec): 00:11:00.171 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:11:00.171 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:11:00.171 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 210], 00:11:00.171 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:11:00.171 | 99.99th=[ 1303] 00:11:00.171 bw ( KiB/s): min=10480, max=10480, per=47.35%, avg=10480.00, stdev= 0.00, samples=1 00:11:00.171 iops : min= 2620, max= 2620, avg=2620.00, stdev= 0.00, samples=1 00:11:00.171 lat (usec) : 250=86.04%, 500=13.93% 00:11:00.171 lat (msec) : 2=0.02% 00:11:00.171 cpu : usr=5.40%, sys=6.70%, ctx=4633, majf=0, minf=1 00:11:00.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.171 issued rwts: total=2069,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.171 job1: (groupid=0, jobs=1): err= 0: pid=1419023: Fri Jul 26 11:18:55 2024 00:11:00.171 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:00.171 slat (nsec): min=7090, max=30867, avg=8205.23, stdev=1485.79 00:11:00.171 clat (usec): min=187, max=41315, avg=297.84, stdev=1561.29 00:11:00.171 
lat (usec): min=195, max=41323, avg=306.05, stdev=1561.28 00:11:00.171 clat percentiles (usec): 00:11:00.171 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:11:00.171 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 239], 00:11:00.171 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 262], 00:11:00.171 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[40633], 99.95th=[41157], 00:11:00.171 | 99.99th=[41157] 00:11:00.171 write: IOPS=2046, BW=8188KiB/s (8384kB/s)(8196KiB/1001msec); 0 zone resets 00:11:00.171 slat (nsec): min=10192, max=45571, avg=12159.24, stdev=1726.08 00:11:00.171 clat (usec): min=127, max=356, avg=163.18, stdev=26.74 00:11:00.171 lat (usec): min=138, max=401, avg=175.34, stdev=26.95 00:11:00.171 clat percentiles (usec): 00:11:00.171 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:11:00.172 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:11:00.172 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 190], 95.00th=[ 231], 00:11:00.172 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:11:00.172 | 99.99th=[ 355] 00:11:00.172 bw ( KiB/s): min= 8192, max= 8192, per=37.01%, avg=8192.00, stdev= 0.00, samples=1 00:11:00.172 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:00.172 lat (usec) : 250=90.72%, 500=9.20% 00:11:00.172 lat (msec) : 50=0.07% 00:11:00.172 cpu : usr=4.50%, sys=5.60%, ctx=4098, majf=0, minf=1 00:11:00.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.172 issued rwts: total=2048,2049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.172 job2: (groupid=0, jobs=1): err= 0: pid=1419024: Fri Jul 26 11:18:55 2024 00:11:00.172 read: IOPS=55, BW=220KiB/s 
(226kB/s)(224KiB/1016msec) 00:11:00.172 slat (nsec): min=7764, max=25098, avg=13789.18, stdev=6622.50 00:11:00.172 clat (usec): min=240, max=42078, avg=16372.08, stdev=20210.93 00:11:00.172 lat (usec): min=248, max=42099, avg=16385.87, stdev=20213.13 00:11:00.172 clat percentiles (usec): 00:11:00.172 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:11:00.172 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 330], 00:11:00.172 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:00.172 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.172 | 99.99th=[42206] 00:11:00.172 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:11:00.172 slat (nsec): min=10587, max=48172, avg=12308.67, stdev=2129.16 00:11:00.172 clat (usec): min=142, max=287, avg=175.80, stdev=15.84 00:11:00.172 lat (usec): min=154, max=298, avg=188.11, stdev=16.20 00:11:00.172 clat percentiles (usec): 00:11:00.172 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:11:00.172 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:11:00.172 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:11:00.172 | 99.00th=[ 215], 99.50th=[ 233], 99.90th=[ 289], 99.95th=[ 289], 00:11:00.172 | 99.99th=[ 289] 00:11:00.172 bw ( KiB/s): min= 4096, max= 4096, per=18.51%, avg=4096.00, stdev= 0.00, samples=1 00:11:00.172 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.172 lat (usec) : 250=91.37%, 500=4.75% 00:11:00.172 lat (msec) : 50=3.87% 00:11:00.172 cpu : usr=0.39%, sys=1.08%, ctx=568, majf=0, minf=2 00:11:00.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.172 issued rwts: total=56,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.172 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:11:00.172 job3: (groupid=0, jobs=1): err= 0: pid=1419026: Fri Jul 26 11:18:55 2024 00:11:00.172 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:11:00.172 slat (nsec): min=9132, max=23937, avg=22678.38, stdev=3117.43 00:11:00.172 clat (usec): min=40825, max=42172, avg=41215.55, stdev=457.22 00:11:00.172 lat (usec): min=40848, max=42181, avg=41238.23, stdev=455.72 00:11:00.172 clat percentiles (usec): 00:11:00.172 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:00.172 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.172 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:00.172 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.172 | 99.99th=[42206] 00:11:00.172 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:00.172 slat (usec): min=9, max=22949, avg=56.15, stdev=1013.75 00:11:00.172 clat (usec): min=145, max=479, avg=238.09, stdev=22.67 00:11:00.172 lat (usec): min=155, max=23186, avg=294.24, stdev=1013.96 00:11:00.172 clat percentiles (usec): 00:11:00.172 | 1.00th=[ 155], 5.00th=[ 176], 10.00th=[ 237], 20.00th=[ 239], 00:11:00.172 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 241], 60.00th=[ 243], 00:11:00.172 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 245], 95.00th=[ 249], 00:11:00.172 | 99.00th=[ 260], 99.50th=[ 293], 99.90th=[ 478], 99.95th=[ 478], 00:11:00.172 | 99.99th=[ 478] 00:11:00.172 bw ( KiB/s): min= 4096, max= 4096, per=18.51%, avg=4096.00, stdev= 0.00, samples=1 00:11:00.172 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.172 lat (usec) : 250=92.50%, 500=3.56% 00:11:00.172 lat (msec) : 50=3.94% 00:11:00.172 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=1 00:11:00.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:00.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.172 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.172 00:11:00.172 Run status group 0 (all jobs): 00:11:00.172 READ: bw=16.1MiB/s (16.9MB/s), 82.5KiB/s-8268KiB/s (84.5kB/s-8466kB/s), io=16.4MiB (17.2MB), run=1001-1018msec 00:11:00.172 WRITE: bw=21.6MiB/s (22.7MB/s), 2012KiB/s-9.99MiB/s (2060kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1018msec 00:11:00.172 00:11:00.172 Disk stats (read/write): 00:11:00.172 nvme0n1: ios=1924/2048, merge=0/0, ticks=1432/306, in_queue=1738, util=98.00% 00:11:00.172 nvme0n2: ios=1578/2048, merge=0/0, ticks=1418/298, in_queue=1716, util=98.57% 00:11:00.172 nvme0n3: ios=50/512, merge=0/0, ticks=753/85, in_queue=838, util=89.05% 00:11:00.172 nvme0n4: ios=40/512, merge=0/0, ticks=1645/122, in_queue=1767, util=98.42% 00:11:00.172 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:00.172 [global] 00:11:00.172 thread=1 00:11:00.172 invalidate=1 00:11:00.172 rw=randwrite 00:11:00.172 time_based=1 00:11:00.172 runtime=1 00:11:00.172 ioengine=libaio 00:11:00.172 direct=1 00:11:00.172 bs=4096 00:11:00.172 iodepth=1 00:11:00.172 norandommap=0 00:11:00.172 numjobs=1 00:11:00.172 00:11:00.172 verify_dump=1 00:11:00.172 verify_backlog=512 00:11:00.172 verify_state_save=0 00:11:00.172 do_verify=1 00:11:00.172 verify=crc32c-intel 00:11:00.172 [job0] 00:11:00.172 filename=/dev/nvme0n1 00:11:00.172 [job1] 00:11:00.172 filename=/dev/nvme0n2 00:11:00.172 [job2] 00:11:00.172 filename=/dev/nvme0n3 00:11:00.172 [job3] 00:11:00.172 filename=/dev/nvme0n4 00:11:00.172 Could not set queue depth (nvme0n1) 00:11:00.172 Could not set queue depth (nvme0n2) 00:11:00.172 Could not set queue depth (nvme0n3) 00:11:00.172 Could not set queue 
depth (nvme0n4) 00:11:00.430 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.430 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.430 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.430 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.430 fio-3.35 00:11:00.430 Starting 4 threads 00:11:01.802 00:11:01.802 job0: (groupid=0, jobs=1): err= 0: pid=1419394: Fri Jul 26 11:18:57 2024 00:11:01.802 read: IOPS=932, BW=3729KiB/s (3818kB/s)(3852KiB/1033msec) 00:11:01.802 slat (nsec): min=6270, max=24032, avg=7292.29, stdev=1670.62 00:11:01.802 clat (usec): min=163, max=41273, avg=853.65, stdev=5052.21 00:11:01.802 lat (usec): min=170, max=41282, avg=860.94, stdev=5053.29 00:11:01.802 clat percentiles (usec): 00:11:01.802 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 188], 00:11:01.802 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 237], 00:11:01.802 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 262], 00:11:01.803 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:01.803 | 99.99th=[41157] 00:11:01.803 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:11:01.803 slat (nsec): min=8563, max=38454, avg=10132.28, stdev=2049.31 00:11:01.803 clat (usec): min=106, max=417, avg=184.53, stdev=53.26 00:11:01.803 lat (usec): min=115, max=456, avg=194.67, stdev=53.75 00:11:01.803 clat percentiles (usec): 00:11:01.803 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 131], 00:11:01.803 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 161], 60.00th=[ 225], 00:11:01.803 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:11:01.803 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 326], 99.95th=[ 416], 00:11:01.803 | 
99.99th=[ 416] 00:11:01.803 bw ( KiB/s): min= 8192, max= 8192, per=42.40%, avg=8192.00, stdev= 0.00, samples=1 00:11:01.803 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:01.803 lat (usec) : 250=89.43%, 500=9.81% 00:11:01.803 lat (msec) : 50=0.75% 00:11:01.803 cpu : usr=0.87%, sys=1.74%, ctx=1987, majf=0, minf=1 00:11:01.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 issued rwts: total=963,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.803 job1: (groupid=0, jobs=1): err= 0: pid=1419395: Fri Jul 26 11:18:57 2024 00:11:01.803 read: IOPS=1067, BW=4272KiB/s (4374kB/s)(4276KiB/1001msec) 00:11:01.803 slat (nsec): min=6972, max=29759, avg=8206.78, stdev=1898.28 00:11:01.803 clat (usec): min=183, max=41972, avg=680.64, stdev=4336.06 00:11:01.803 lat (usec): min=190, max=41994, avg=688.85, stdev=4337.43 00:11:01.803 clat percentiles (usec): 00:11:01.803 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:11:01.803 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:11:01.803 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 237], 95.00th=[ 243], 00:11:01.803 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:01.803 | 99.99th=[42206] 00:11:01.803 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:01.803 slat (nsec): min=8717, max=35578, avg=10581.09, stdev=1619.41 00:11:01.803 clat (usec): min=113, max=303, avg=155.82, stdev=20.67 00:11:01.803 lat (usec): min=124, max=338, avg=166.40, stdev=20.59 00:11:01.803 clat percentiles (usec): 00:11:01.803 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:11:01.803 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 159], 
00:11:01.803 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:11:01.803 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 302], 99.95th=[ 302], 00:11:01.803 | 99.99th=[ 302] 00:11:01.803 bw ( KiB/s): min= 4096, max= 4096, per=21.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:01.803 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:01.803 lat (usec) : 250=98.46%, 500=1.07% 00:11:01.803 lat (msec) : 50=0.46% 00:11:01.803 cpu : usr=2.10%, sys=2.00%, ctx=2605, majf=0, minf=2 00:11:01.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 issued rwts: total=1069,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.803 job2: (groupid=0, jobs=1): err= 0: pid=1419397: Fri Jul 26 11:18:57 2024 00:11:01.803 read: IOPS=22, BW=91.0KiB/s (93.2kB/s)(92.0KiB/1011msec) 00:11:01.803 slat (nsec): min=9365, max=25978, avg=21713.57, stdev=3873.24 00:11:01.803 clat (usec): min=476, max=41966, avg=39434.84, stdev=8501.15 00:11:01.803 lat (usec): min=502, max=41989, avg=39456.55, stdev=8500.20 00:11:01.803 clat percentiles (usec): 00:11:01.803 | 1.00th=[ 478], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:01.803 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:01.803 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:01.803 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:01.803 | 99.99th=[42206] 00:11:01.803 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:01.803 slat (nsec): min=9230, max=35989, avg=10432.25, stdev=1857.99 00:11:01.803 clat (usec): min=141, max=474, avg=189.16, stdev=31.05 00:11:01.803 lat (usec): min=152, max=510, avg=199.60, stdev=31.72 
00:11:01.803 clat percentiles (usec): 00:11:01.803 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 165], 00:11:01.803 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:11:01.803 | 70.00th=[ 200], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 243], 00:11:01.803 | 99.00th=[ 277], 99.50th=[ 306], 99.90th=[ 474], 99.95th=[ 474], 00:11:01.803 | 99.99th=[ 474] 00:11:01.803 bw ( KiB/s): min= 4096, max= 4096, per=21.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:01.803 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:01.803 lat (usec) : 250=94.21%, 500=1.68% 00:11:01.803 lat (msec) : 50=4.11% 00:11:01.803 cpu : usr=0.40%, sys=0.40%, ctx=536, majf=0, minf=1 00:11:01.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.803 job3: (groupid=0, jobs=1): err= 0: pid=1419401: Fri Jul 26 11:18:57 2024 00:11:01.803 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:01.803 slat (nsec): min=6569, max=24886, avg=7461.25, stdev=1162.41 00:11:01.803 clat (usec): min=194, max=41826, avg=412.35, stdev=2554.06 00:11:01.803 lat (usec): min=201, max=41849, avg=419.81, stdev=2554.87 00:11:01.803 clat percentiles (usec): 00:11:01.803 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:11:01.803 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 255], 60.00th=[ 265], 00:11:01.803 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:11:01.803 | 99.00th=[ 334], 99.50th=[ 388], 99.90th=[41157], 99.95th=[41681], 00:11:01.803 | 99.99th=[41681] 00:11:01.803 write: IOPS=1916, BW=7664KiB/s (7848kB/s)(7672KiB/1001msec); 0 zone resets 00:11:01.803 slat (nsec): min=8993, 
max=42144, avg=10081.79, stdev=1551.62 00:11:01.803 clat (usec): min=125, max=432, avg=171.03, stdev=20.72 00:11:01.803 lat (usec): min=138, max=441, avg=181.11, stdev=20.89 00:11:01.803 clat percentiles (usec): 00:11:01.803 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155], 00:11:01.803 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:01.803 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 200], 00:11:01.803 | 99.00th=[ 219], 99.50th=[ 251], 99.90th=[ 420], 99.95th=[ 433], 00:11:01.803 | 99.99th=[ 433] 00:11:01.803 bw ( KiB/s): min= 8192, max= 8192, per=42.40%, avg=8192.00, stdev= 0.00, samples=1 00:11:01.803 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:01.803 lat (usec) : 250=76.00%, 500=23.83% 00:11:01.803 lat (msec) : 50=0.17% 00:11:01.803 cpu : usr=1.40%, sys=3.50%, ctx=3454, majf=0, minf=1 00:11:01.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.803 issued rwts: total=1536,1918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.803 00:11:01.803 Run status group 0 (all jobs): 00:11:01.803 READ: bw=13.6MiB/s (14.2MB/s), 91.0KiB/s-6138KiB/s (93.2kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1033msec 00:11:01.803 WRITE: bw=18.9MiB/s (19.8MB/s), 2026KiB/s-7664KiB/s (2074kB/s-7848kB/s), io=19.5MiB (20.4MB), run=1001-1033msec 00:11:01.803 00:11:01.803 Disk stats (read/write): 00:11:01.803 nvme0n1: ios=1007/1024, merge=0/0, ticks=603/182, in_queue=785, util=82.77% 00:11:01.803 nvme0n2: ios=604/1024, merge=0/0, ticks=624/163, in_queue=787, util=83.49% 00:11:01.803 nvme0n3: ios=76/512, merge=0/0, ticks=877/99, in_queue=976, util=98.49% 00:11:01.803 nvme0n4: ios=1133/1536, merge=0/0, ticks=514/247, in_queue=761, util=89.25% 
00:11:01.803 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:01.803 [global] 00:11:01.803 thread=1 00:11:01.803 invalidate=1 00:11:01.803 rw=write 00:11:01.803 time_based=1 00:11:01.803 runtime=1 00:11:01.803 ioengine=libaio 00:11:01.803 direct=1 00:11:01.803 bs=4096 00:11:01.803 iodepth=128 00:11:01.803 norandommap=0 00:11:01.803 numjobs=1 00:11:01.803 00:11:01.803 verify_dump=1 00:11:01.803 verify_backlog=512 00:11:01.803 verify_state_save=0 00:11:01.803 do_verify=1 00:11:01.803 verify=crc32c-intel 00:11:01.803 [job0] 00:11:01.803 filename=/dev/nvme0n1 00:11:01.803 [job1] 00:11:01.803 filename=/dev/nvme0n2 00:11:01.803 [job2] 00:11:01.803 filename=/dev/nvme0n3 00:11:01.803 [job3] 00:11:01.803 filename=/dev/nvme0n4 00:11:01.803 Could not set queue depth (nvme0n1) 00:11:01.803 Could not set queue depth (nvme0n2) 00:11:01.803 Could not set queue depth (nvme0n3) 00:11:01.803 Could not set queue depth (nvme0n4) 00:11:02.061 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 fio-3.35 00:11:02.061 Starting 4 threads 00:11:03.436 00:11:03.436 job0: (groupid=0, jobs=1): err= 0: pid=1419773: Fri Jul 26 11:18:58 2024 00:11:03.436 read: IOPS=5607, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:11:03.436 slat (nsec): min=1264, max=25971k, avg=99512.95, stdev=785971.03 00:11:03.436 clat (usec): min=1735, max=45020, avg=12324.76, stdev=5406.60 00:11:03.436 lat (usec): min=3488, max=45045, avg=12424.27, stdev=5452.76 
00:11:03.436 clat percentiles (usec): 00:11:03.436 | 1.00th=[ 4621], 5.00th=[ 7308], 10.00th=[ 8717], 20.00th=[ 9372], 00:11:03.436 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10683], 00:11:03.436 | 70.00th=[13042], 80.00th=[15270], 90.00th=[19006], 95.00th=[22676], 00:11:03.436 | 99.00th=[33424], 99.50th=[33817], 99.90th=[35914], 99.95th=[35914], 00:11:03.436 | 99.99th=[44827] 00:11:03.436 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:03.436 slat (usec): min=2, max=12776, avg=69.90, stdev=353.77 00:11:03.436 clat (usec): min=919, max=35960, avg=10201.27, stdev=3866.96 00:11:03.436 lat (usec): min=930, max=35965, avg=10271.17, stdev=3901.02 00:11:03.436 clat percentiles (usec): 00:11:03.436 | 1.00th=[ 3130], 5.00th=[ 4686], 10.00th=[ 6063], 20.00th=[ 8029], 00:11:03.436 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9765], 00:11:03.436 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[16581], 95.00th=[17433], 00:11:03.436 | 99.00th=[23462], 99.50th=[24511], 99.90th=[26084], 99.95th=[34866], 00:11:03.436 | 99.99th=[35914] 00:11:03.436 bw ( KiB/s): min=20480, max=24576, per=30.91%, avg=22528.00, stdev=2896.31, samples=2 00:11:03.436 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:11:03.436 lat (usec) : 1000=0.07% 00:11:03.436 lat (msec) : 2=0.24%, 4=1.42%, 10=58.79%, 20=34.35%, 50=5.14% 00:11:03.436 cpu : usr=4.59%, sys=5.99%, ctx=699, majf=0, minf=1 00:11:03.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:03.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.436 issued rwts: total=5624,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.436 job1: (groupid=0, jobs=1): err= 0: pid=1419774: Fri Jul 26 11:18:58 2024 00:11:03.436 read: IOPS=4589, BW=17.9MiB/s 
(18.8MB/s)(18.0MiB/1004msec) 00:11:03.436 slat (nsec): min=1379, max=14868k, avg=92190.74, stdev=643612.31 00:11:03.436 clat (usec): min=1615, max=31141, avg=11387.36, stdev=4177.29 00:11:03.436 lat (usec): min=1625, max=40481, avg=11479.55, stdev=4242.59 00:11:03.436 clat percentiles (usec): 00:11:03.436 | 1.00th=[ 2868], 5.00th=[ 4555], 10.00th=[ 8029], 20.00th=[ 9372], 00:11:03.436 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[11076], 00:11:03.436 | 70.00th=[11731], 80.00th=[14353], 90.00th=[16450], 95.00th=[17957], 00:11:03.436 | 99.00th=[27132], 99.50th=[28967], 99.90th=[31065], 99.95th=[31065], 00:11:03.436 | 99.99th=[31065] 00:11:03.436 write: IOPS=5077, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1004msec); 0 zone resets 00:11:03.436 slat (usec): min=2, max=11532, avg=100.55, stdev=522.51 00:11:03.436 clat (usec): min=367, max=72385, avg=14577.28, stdev=9116.28 00:11:03.436 lat (usec): min=638, max=72389, avg=14677.83, stdev=9164.31 00:11:03.436 clat percentiles (usec): 00:11:03.436 | 1.00th=[ 1745], 5.00th=[ 4948], 10.00th=[ 7767], 20.00th=[ 9241], 00:11:03.436 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[13173], 00:11:03.436 | 70.00th=[17171], 80.00th=[20841], 90.00th=[26084], 95.00th=[31851], 00:11:03.436 | 99.00th=[54264], 99.50th=[62129], 99.90th=[69731], 99.95th=[72877], 00:11:03.436 | 99.99th=[72877] 00:11:03.436 bw ( KiB/s): min=17648, max=22120, per=27.28%, avg=19884.00, stdev=3162.18, samples=2 00:11:03.436 iops : min= 4412, max= 5530, avg=4971.00, stdev=790.55, samples=2 00:11:03.436 lat (usec) : 500=0.01%, 750=0.19%, 1000=0.07% 00:11:03.437 lat (msec) : 2=0.46%, 4=3.47%, 10=36.51%, 20=44.98%, 50=13.74% 00:11:03.437 lat (msec) : 100=0.56% 00:11:03.437 cpu : usr=5.28%, sys=4.89%, ctx=515, majf=0, minf=1 00:11:03.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:03.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.437 issued rwts: total=4608,5098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.437 job2: (groupid=0, jobs=1): err= 0: pid=1419777: Fri Jul 26 11:18:58 2024 00:11:03.437 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:11:03.437 slat (nsec): min=1494, max=20186k, avg=155564.20, stdev=1099522.99 00:11:03.437 clat (usec): min=8203, max=55970, avg=18697.04, stdev=9003.02 00:11:03.437 lat (usec): min=8207, max=55997, avg=18852.60, stdev=9092.49 00:11:03.437 clat percentiles (usec): 00:11:03.437 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:11:03.437 | 30.00th=[12911], 40.00th=[14353], 50.00th=[15139], 60.00th=[16909], 00:11:03.437 | 70.00th=[21103], 80.00th=[23462], 90.00th=[29492], 95.00th=[41157], 00:11:03.437 | 99.00th=[48497], 99.50th=[48497], 99.90th=[50594], 99.95th=[52691], 00:11:03.437 | 99.99th=[55837] 00:11:03.437 write: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1007msec); 0 zone resets 00:11:03.437 slat (usec): min=2, max=23029, avg=204.33, stdev=1053.47 00:11:03.437 clat (usec): min=6187, max=61430, avg=27830.13, stdev=11026.01 00:11:03.437 lat (usec): min=6198, max=61461, avg=28034.46, stdev=11094.99 00:11:03.437 clat percentiles (usec): 00:11:03.437 | 1.00th=[ 9110], 5.00th=[14615], 10.00th=[16712], 20.00th=[19530], 00:11:03.437 | 30.00th=[21103], 40.00th=[22152], 50.00th=[22938], 60.00th=[28443], 00:11:03.437 | 70.00th=[31327], 80.00th=[37487], 90.00th=[45876], 95.00th=[49021], 00:11:03.437 | 99.00th=[56886], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:11:03.437 | 99.99th=[61604] 00:11:03.437 bw ( KiB/s): min= 9600, max=12288, per=15.01%, avg=10944.00, stdev=1900.70, samples=2 00:11:03.437 iops : min= 2400, max= 3072, avg=2736.00, stdev=475.18, samples=2 00:11:03.437 lat (msec) : 10=1.59%, 20=41.69%, 50=54.36%, 100=2.36% 00:11:03.437 cpu : usr=2.49%, sys=2.98%, ctx=366, majf=0, minf=1 
00:11:03.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:03.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.437 issued rwts: total=2560,2863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.437 job3: (groupid=0, jobs=1): err= 0: pid=1419778: Fri Jul 26 11:18:58 2024 00:11:03.437 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:11:03.437 slat (nsec): min=1010, max=15621k, avg=97939.21, stdev=782455.89 00:11:03.437 clat (usec): min=455, max=32258, avg=13976.85, stdev=4420.17 00:11:03.437 lat (usec): min=460, max=32266, avg=14074.79, stdev=4481.88 00:11:03.437 clat percentiles (usec): 00:11:03.437 | 1.00th=[ 4621], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10814], 00:11:03.437 | 30.00th=[11207], 40.00th=[11600], 50.00th=[13435], 60.00th=[14484], 00:11:03.437 | 70.00th=[15533], 80.00th=[16581], 90.00th=[20317], 95.00th=[21890], 00:11:03.437 | 99.00th=[27919], 99.50th=[30278], 99.90th=[32375], 99.95th=[32375], 00:11:03.437 | 99.99th=[32375] 00:11:03.437 write: IOPS=4737, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1008msec); 0 zone resets 00:11:03.437 slat (nsec): min=1793, max=13529k, avg=89911.73, stdev=542342.62 00:11:03.437 clat (usec): min=1081, max=47670, avg=13304.07, stdev=6865.12 00:11:03.437 lat (usec): min=1093, max=47675, avg=13393.98, stdev=6912.08 00:11:03.437 clat percentiles (usec): 00:11:03.437 | 1.00th=[ 1631], 5.00th=[ 4359], 10.00th=[ 6259], 20.00th=[ 8979], 00:11:03.437 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11469], 60.00th=[12780], 00:11:03.437 | 70.00th=[15270], 80.00th=[16909], 90.00th=[22152], 95.00th=[24511], 00:11:03.437 | 99.00th=[39060], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:11:03.437 | 99.99th=[47449] 00:11:03.437 bw ( KiB/s): min=16704, max=20480, per=25.51%, avg=18592.00, stdev=2670.04, 
samples=2 00:11:03.437 iops : min= 4176, max= 5120, avg=4648.00, stdev=667.51, samples=2 00:11:03.437 lat (usec) : 500=0.06%, 750=0.03% 00:11:03.437 lat (msec) : 2=0.72%, 4=1.83%, 10=16.92%, 20=68.29%, 50=12.13% 00:11:03.437 cpu : usr=2.68%, sys=4.57%, ctx=476, majf=0, minf=1 00:11:03.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:03.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.437 issued rwts: total=4608,4775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.437 00:11:03.437 Run status group 0 (all jobs): 00:11:03.437 READ: bw=67.4MiB/s (70.7MB/s), 9.93MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=68.0MiB (71.3MB), run=1003-1008msec 00:11:03.437 WRITE: bw=71.2MiB/s (74.6MB/s), 11.1MiB/s-21.9MiB/s (11.6MB/s-23.0MB/s), io=71.8MiB (75.2MB), run=1003-1008msec 00:11:03.437 00:11:03.437 Disk stats (read/write): 00:11:03.437 nvme0n1: ios=4630/4687, merge=0/0, ticks=57466/47914, in_queue=105380, util=96.89% 00:11:03.437 nvme0n2: ios=3742/4096, merge=0/0, ticks=36754/56655, in_queue=93409, util=98.68% 00:11:03.437 nvme0n3: ios=2048/2471, merge=0/0, ticks=19235/33553, in_queue=52788, util=88.97% 00:11:03.437 nvme0n4: ios=4096/4375, merge=0/0, ticks=42286/39001, in_queue=81287, util=89.72% 00:11:03.437 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:03.437 [global] 00:11:03.437 thread=1 00:11:03.437 invalidate=1 00:11:03.437 rw=randwrite 00:11:03.437 time_based=1 00:11:03.437 runtime=1 00:11:03.437 ioengine=libaio 00:11:03.437 direct=1 00:11:03.437 bs=4096 00:11:03.437 iodepth=128 00:11:03.437 norandommap=0 00:11:03.437 numjobs=1 00:11:03.437 00:11:03.437 verify_dump=1 00:11:03.437 verify_backlog=512 
00:11:03.437 verify_state_save=0 00:11:03.437 do_verify=1 00:11:03.437 verify=crc32c-intel 00:11:03.437 [job0] 00:11:03.437 filename=/dev/nvme0n1 00:11:03.437 [job1] 00:11:03.437 filename=/dev/nvme0n2 00:11:03.437 [job2] 00:11:03.437 filename=/dev/nvme0n3 00:11:03.437 [job3] 00:11:03.437 filename=/dev/nvme0n4 00:11:03.437 Could not set queue depth (nvme0n1) 00:11:03.437 Could not set queue depth (nvme0n2) 00:11:03.437 Could not set queue depth (nvme0n3) 00:11:03.437 Could not set queue depth (nvme0n4) 00:11:03.437 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.437 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.437 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.437 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.437 fio-3.35 00:11:03.437 Starting 4 threads 00:11:04.810 00:11:04.810 job0: (groupid=0, jobs=1): err= 0: pid=1420148: Fri Jul 26 11:19:00 2024 00:11:04.810 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:11:04.810 slat (nsec): min=1010, max=22219k, avg=135699.83, stdev=928931.38 00:11:04.810 clat (usec): min=5664, max=55447, avg=17396.07, stdev=10761.52 00:11:04.810 lat (usec): min=6317, max=55456, avg=17531.77, stdev=10815.14 00:11:04.810 clat percentiles (usec): 00:11:04.810 | 1.00th=[ 6980], 5.00th=[ 8291], 10.00th=[ 9372], 20.00th=[ 9896], 00:11:04.810 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11863], 60.00th=[17171], 00:11:04.810 | 70.00th=[20841], 80.00th=[21365], 90.00th=[35390], 95.00th=[45351], 00:11:04.810 | 99.00th=[52167], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:11:04.810 | 99.99th=[55313] 00:11:04.810 write: IOPS=4074, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:11:04.810 slat (nsec): min=1749, max=9416.6k, 
avg=120061.41, stdev=609702.76 00:11:04.810 clat (usec): min=4351, max=59036, avg=15760.81, stdev=9465.90 00:11:04.810 lat (usec): min=4863, max=59046, avg=15880.87, stdev=9530.48 00:11:04.810 clat percentiles (usec): 00:11:04.810 | 1.00th=[ 6128], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9503], 00:11:04.810 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[11338], 60.00th=[12256], 00:11:04.810 | 70.00th=[20579], 80.00th=[21103], 90.00th=[25297], 95.00th=[30278], 00:11:04.810 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:11:04.810 | 99.99th=[58983] 00:11:04.810 bw ( KiB/s): min=15872, max=15872, per=20.61%, avg=15872.00, stdev= 0.00, samples=2 00:11:04.810 iops : min= 3968, max= 3968, avg=3968.00, stdev= 0.00, samples=2 00:11:04.810 lat (msec) : 10=30.88%, 20=35.77%, 50=31.33%, 100=2.02% 00:11:04.810 cpu : usr=2.49%, sys=4.08%, ctx=460, majf=0, minf=1 00:11:04.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:04.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.810 issued rwts: total=3584,4095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.810 job1: (groupid=0, jobs=1): err= 0: pid=1420149: Fri Jul 26 11:19:00 2024 00:11:04.810 read: IOPS=5813, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1007msec) 00:11:04.810 slat (nsec): min=1341, max=9966.0k, avg=85832.08, stdev=613152.20 00:11:04.810 clat (usec): min=3916, max=30498, avg=10540.64, stdev=2843.03 00:11:04.810 lat (usec): min=3928, max=30501, avg=10626.47, stdev=2891.12 00:11:04.810 clat percentiles (usec): 00:11:04.810 | 1.00th=[ 4817], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9110], 00:11:04.810 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:11:04.810 | 70.00th=[10683], 80.00th=[11469], 90.00th=[14353], 95.00th=[15795], 00:11:04.810 | 99.00th=[23462], 
99.50th=[25035], 99.90th=[29230], 99.95th=[30540], 00:11:04.810 | 99.99th=[30540] 00:11:04.810 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:11:04.810 slat (usec): min=2, max=8484, avg=72.13, stdev=426.81 00:11:04.810 clat (usec): min=271, max=46804, avg=10684.28, stdev=6938.19 00:11:04.810 lat (usec): min=358, max=46829, avg=10756.42, stdev=6993.21 00:11:04.810 clat percentiles (usec): 00:11:04.810 | 1.00th=[ 3261], 5.00th=[ 4686], 10.00th=[ 6128], 20.00th=[ 7832], 00:11:04.810 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:11:04.810 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11863], 95.00th=[25560], 00:11:04.810 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:11:04.810 | 99.99th=[46924] 00:11:04.810 bw ( KiB/s): min=20496, max=28656, per=31.91%, avg=24576.00, stdev=5769.99, samples=2 00:11:04.810 iops : min= 5124, max= 7164, avg=6144.00, stdev=1442.50, samples=2 00:11:04.810 lat (usec) : 500=0.02%, 1000=0.08% 00:11:04.810 lat (msec) : 2=0.03%, 4=1.34%, 10=67.49%, 20=26.32%, 50=4.72% 00:11:04.810 cpu : usr=5.67%, sys=6.96%, ctx=634, majf=0, minf=1 00:11:04.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:04.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.810 issued rwts: total=5854,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.810 job2: (groupid=0, jobs=1): err= 0: pid=1420150: Fri Jul 26 11:19:00 2024 00:11:04.810 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:11:04.811 slat (nsec): min=1458, max=9103.2k, avg=103452.16, stdev=624246.09 00:11:04.811 clat (usec): min=6583, max=28521, avg=12846.44, stdev=2915.40 00:11:04.811 lat (usec): min=6592, max=28528, avg=12949.89, stdev=2971.05 00:11:04.811 clat percentiles (usec): 00:11:04.811 | 
1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10683], 00:11:04.811 | 30.00th=[10945], 40.00th=[11338], 50.00th=[12780], 60.00th=[13304], 00:11:04.811 | 70.00th=[13960], 80.00th=[14746], 90.00th=[16712], 95.00th=[17957], 00:11:04.811 | 99.00th=[21103], 99.50th=[24511], 99.90th=[28443], 99.95th=[28443], 00:11:04.811 | 99.99th=[28443] 00:11:04.811 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(19.3MiB/1009msec); 0 zone resets 00:11:04.811 slat (usec): min=2, max=11865, avg=100.27, stdev=545.32 00:11:04.811 clat (usec): min=4796, max=46063, avg=13883.01, stdev=5097.33 00:11:04.811 lat (usec): min=4813, max=46071, avg=13983.28, stdev=5125.50 00:11:04.811 clat percentiles (usec): 00:11:04.811 | 1.00th=[ 7046], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[10814], 00:11:04.811 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[13173], 00:11:04.811 | 70.00th=[14353], 80.00th=[16712], 90.00th=[20055], 95.00th=[22938], 00:11:04.811 | 99.00th=[34866], 99.50th=[39060], 99.90th=[45876], 99.95th=[45876], 00:11:04.811 | 99.99th=[45876] 00:11:04.811 bw ( KiB/s): min=18768, max=19688, per=24.96%, avg=19228.00, stdev=650.54, samples=2 00:11:04.811 iops : min= 4692, max= 4922, avg=4807.00, stdev=162.63, samples=2 00:11:04.811 lat (msec) : 10=9.48%, 20=84.34%, 50=6.17% 00:11:04.811 cpu : usr=4.07%, sys=5.75%, ctx=563, majf=0, minf=1 00:11:04.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:04.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.811 issued rwts: total=4608,4935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.811 job3: (groupid=0, jobs=1): err= 0: pid=1420151: Fri Jul 26 11:19:00 2024 00:11:04.811 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:11:04.811 slat (nsec): min=1113, max=13781k, avg=125779.74, stdev=912894.13 
00:11:04.811 clat (usec): min=4631, max=53396, avg=15674.93, stdev=9972.26 00:11:04.811 lat (usec): min=4646, max=53402, avg=15800.71, stdev=10039.06 00:11:04.811 clat percentiles (usec): 00:11:04.811 | 1.00th=[ 6915], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[10814], 00:11:04.811 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:11:04.811 | 70.00th=[13960], 80.00th=[16188], 90.00th=[26608], 95.00th=[48497], 00:11:04.811 | 99.00th=[50594], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:11:04.811 | 99.99th=[53216] 00:11:04.811 write: IOPS=4232, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1010msec); 0 zone resets 00:11:04.811 slat (usec): min=2, max=6381, avg=105.35, stdev=448.30 00:11:04.811 clat (usec): min=2573, max=30404, avg=14844.21, stdev=5043.63 00:11:04.811 lat (usec): min=2596, max=30409, avg=14949.57, stdev=5079.15 00:11:04.811 clat percentiles (usec): 00:11:04.811 | 1.00th=[ 4015], 5.00th=[ 6652], 10.00th=[ 9241], 20.00th=[10945], 00:11:04.811 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13173], 60.00th=[15926], 00:11:04.811 | 70.00th=[19530], 80.00th=[20841], 90.00th=[21103], 95.00th=[21365], 00:11:04.811 | 99.00th=[23725], 99.50th=[25297], 99.90th=[27132], 99.95th=[27132], 00:11:04.811 | 99.99th=[30278] 00:11:04.811 bw ( KiB/s): min=15352, max=17832, per=21.54%, avg=16592.00, stdev=1753.62, samples=2 00:11:04.811 iops : min= 3838, max= 4458, avg=4148.00, stdev=438.41, samples=2 00:11:04.811 lat (msec) : 4=0.50%, 10=9.90%, 20=68.96%, 50=19.89%, 100=0.74% 00:11:04.811 cpu : usr=2.58%, sys=5.05%, ctx=508, majf=0, minf=1 00:11:04.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:04.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.811 issued rwts: total=4096,4275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.811 
00:11:04.811 Run status group 0 (all jobs): 00:11:04.811 READ: bw=70.2MiB/s (73.6MB/s), 13.9MiB/s-22.7MiB/s (14.6MB/s-23.8MB/s), io=70.9MiB (74.3MB), run=1005-1010msec 00:11:04.811 WRITE: bw=75.2MiB/s (78.9MB/s), 15.9MiB/s-23.8MiB/s (16.7MB/s-25.0MB/s), io=76.0MiB (79.7MB), run=1005-1010msec 00:11:04.811 00:11:04.811 Disk stats (read/write): 00:11:04.811 nvme0n1: ios=2673/3072, merge=0/0, ticks=14546/15539, in_queue=30085, util=87.17% 00:11:04.811 nvme0n2: ios=5645/5639, merge=0/0, ticks=56544/48054, in_queue=104598, util=100.00% 00:11:04.811 nvme0n3: ios=4096/4247, merge=0/0, ticks=26273/25729, in_queue=52002, util=89.09% 00:11:04.811 nvme0n4: ios=3130/3518, merge=0/0, ticks=31031/37389, in_queue=68420, util=98.32% 00:11:04.811 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:04.811 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1420377 00:11:04.811 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:04.811 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:04.811 [global] 00:11:04.811 thread=1 00:11:04.811 invalidate=1 00:11:04.811 rw=read 00:11:04.811 time_based=1 00:11:04.811 runtime=10 00:11:04.811 ioengine=libaio 00:11:04.811 direct=1 00:11:04.811 bs=4096 00:11:04.811 iodepth=1 00:11:04.811 norandommap=1 00:11:04.811 numjobs=1 00:11:04.811 00:11:04.811 [job0] 00:11:04.811 filename=/dev/nvme0n1 00:11:04.811 [job1] 00:11:04.811 filename=/dev/nvme0n2 00:11:04.811 [job2] 00:11:04.811 filename=/dev/nvme0n3 00:11:04.811 [job3] 00:11:04.811 filename=/dev/nvme0n4 00:11:04.811 Could not set queue depth (nvme0n1) 00:11:04.811 Could not set queue depth (nvme0n2) 00:11:04.811 Could not set queue depth (nvme0n3) 00:11:04.811 Could not set queue depth (nvme0n4) 00:11:05.069 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.069 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.069 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.069 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.069 fio-3.35 00:11:05.069 Starting 4 threads 00:11:08.350 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:08.350 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=278528, buflen=4096 00:11:08.350 fio: pid=1420586, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:08.350 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:08.350 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=35561472, buflen=4096 00:11:08.350 fio: pid=1420580, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:08.350 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.350 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:08.350 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=51417088, buflen=4096 00:11:08.350 fio: pid=1420543, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:08.350 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.350 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:08.608 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.608 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:08.608 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=26263552, buflen=4096 00:11:08.608 fio: pid=1420560, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:08.608 00:11:08.608 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1420543: Fri Jul 26 11:19:04 2024 00:11:08.608 read: IOPS=4004, BW=15.6MiB/s (16.4MB/s)(49.0MiB/3135msec) 00:11:08.608 slat (usec): min=6, max=20992, avg=11.74, stdev=220.21 00:11:08.608 clat (usec): min=173, max=1798, avg=233.95, stdev=41.53 00:11:08.608 lat (usec): min=181, max=21235, avg=244.92, stdev=207.32 00:11:08.608 clat percentiles (usec): 00:11:08.608 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:11:08.608 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:11:08.608 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:11:08.608 | 99.00th=[ 302], 99.50th=[ 433], 99.90th=[ 553], 99.95th=[ 619], 00:11:08.608 | 99.99th=[ 1696] 00:11:08.608 bw ( KiB/s): min=14704, max=17560, per=48.70%, avg=16189.67, stdev=1029.35, samples=6 00:11:08.608 iops : min= 3676, max= 4390, avg=4047.33, stdev=257.31, samples=6 00:11:08.608 lat (usec) : 250=76.98%, 500=22.77%, 750=0.19% 00:11:08.608 lat (msec) : 2=0.05% 00:11:08.608 cpu : usr=1.75%, sys=7.24%, ctx=12559, majf=0, minf=1 00:11:08.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:08.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.608 issued rwts: total=12554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.608 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1420560: Fri Jul 26 11:19:04 2024 00:11:08.608 read: IOPS=1922, BW=7691KiB/s (7875kB/s)(25.0MiB/3335msec) 00:11:08.608 slat (usec): min=7, max=8597, avg=11.46, stdev=156.33 00:11:08.608 clat (usec): min=198, max=41966, avg=502.64, stdev=3208.50 00:11:08.608 lat (usec): min=206, max=46003, avg=514.10, stdev=3222.56 00:11:08.608 clat percentiles (usec): 00:11:08.608 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:11:08.608 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:11:08.608 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 265], 00:11:08.608 | 99.00th=[ 285], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:08.608 | 99.99th=[42206] 00:11:08.608 bw ( KiB/s): min= 96, max=15512, per=25.32%, avg=8418.67, stdev=6274.76, samples=6 00:11:08.608 iops : min= 24, max= 3878, avg=2104.67, stdev=1568.69, samples=6 00:11:08.608 lat (usec) : 250=58.43%, 500=40.89%, 750=0.02% 00:11:08.608 lat (msec) : 2=0.03%, 50=0.62% 00:11:08.608 cpu : usr=1.08%, sys=3.09%, ctx=6417, majf=0, minf=1 00:11:08.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.608 issued rwts: total=6413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.608 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1420580: Fri Jul 26 11:19:04 2024 00:11:08.608 read: IOPS=2957, BW=11.6MiB/s 
(12.1MB/s)(33.9MiB/2936msec) 00:11:08.608 slat (nsec): min=7125, max=41062, avg=8364.09, stdev=1279.53 00:11:08.608 clat (usec): min=202, max=41154, avg=324.87, stdev=1745.44 00:11:08.608 lat (usec): min=217, max=41163, avg=333.24, stdev=1745.96 00:11:08.608 clat percentiles (usec): 00:11:08.608 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:11:08.608 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:11:08.608 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:11:08.608 | 99.00th=[ 375], 99.50th=[ 429], 99.90th=[41157], 99.95th=[41157], 00:11:08.608 | 99.99th=[41157] 00:11:08.608 bw ( KiB/s): min= 192, max=15520, per=34.09%, avg=11332.80, stdev=6681.22, samples=5 00:11:08.608 iops : min= 48, max= 3880, avg=2833.20, stdev=1670.31, samples=5 00:11:08.608 lat (usec) : 250=57.51%, 500=42.27%, 750=0.02% 00:11:08.608 lat (msec) : 50=0.18% 00:11:08.608 cpu : usr=1.91%, sys=4.57%, ctx=8687, majf=0, minf=1 00:11:08.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.608 issued rwts: total=8683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.608 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1420586: Fri Jul 26 11:19:04 2024 00:11:08.608 read: IOPS=25, BW=99.5KiB/s (102kB/s)(272KiB/2735msec) 00:11:08.608 slat (nsec): min=11729, max=39207, avg=17828.26, stdev=5292.22 00:11:08.608 clat (usec): min=314, max=45046, avg=39846.10, stdev=6946.42 00:11:08.608 lat (usec): min=327, max=45060, avg=39863.84, stdev=6944.88 00:11:08.608 clat percentiles (usec): 00:11:08.608 | 1.00th=[ 314], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:08.608 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:11:08.608 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.608 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:08.608 | 99.99th=[44827] 00:11:08.608 bw ( KiB/s): min= 96, max= 104, per=0.30%, avg=99.20, stdev= 4.38, samples=5 00:11:08.608 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:08.608 lat (usec) : 500=2.90% 00:11:08.608 lat (msec) : 50=95.65% 00:11:08.608 cpu : usr=0.11%, sys=0.00%, ctx=71, majf=0, minf=2 00:11:08.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.608 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.608 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.608 00:11:08.608 Run status group 0 (all jobs): 00:11:08.608 READ: bw=32.5MiB/s (34.0MB/s), 99.5KiB/s-15.6MiB/s (102kB/s-16.4MB/s), io=108MiB (114MB), run=2735-3335msec 00:11:08.608 00:11:08.608 Disk stats (read/write): 00:11:08.608 nvme0n1: ios=12553/0, merge=0/0, ticks=2770/0, in_queue=2770, util=94.76% 00:11:08.608 nvme0n2: ios=6434/0, merge=0/0, ticks=3746/0, in_queue=3746, util=98.86% 00:11:08.608 nvme0n3: ios=8480/0, merge=0/0, ticks=3549/0, in_queue=3549, util=99.86% 00:11:08.608 nvme0n4: ios=106/0, merge=0/0, ticks=3429/0, in_queue=3429, util=99.30% 00:11:08.608 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.608 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:08.866 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:11:08.866 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:09.123 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.123 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:09.381 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.381 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:09.381 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:09.381 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1420377 00:11:09.381 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:09.381 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l 
-o NAME,SERIAL 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:09.639 nvmf hotplug test: fio failed as expected 00:11:09.639 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:09.898 rmmod 
nvme_tcp 00:11:09.898 rmmod nvme_fabrics 00:11:09.898 rmmod nvme_keyring 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1417574 ']' 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1417574 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1417574 ']' 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1417574 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1417574 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1417574' 00:11:09.898 killing process with pid 1417574 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1417574 00:11:09.898 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1417574 00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.208 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.123 00:11:12.123 real 0m26.746s 00:11:12.123 user 1m47.976s 00:11:12.123 sys 0m8.577s 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.123 ************************************ 00:11:12.123 END TEST nvmf_fio_target 00:11:12.123 ************************************ 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.123 ************************************ 00:11:12.123 START TEST 
nvmf_bdevio 00:11:12.123 ************************************ 00:11:12.123 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:12.383 * Looking for test storage... 00:11:12.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.383 11:19:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.383 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.950 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.950 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.951 11:19:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.951 11:19:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:18.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:18.951 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:18.951 Found net devices under 0000:86:00.0: cvl_0_0 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:18.951 Found net devices under 0000:86:00.1: cvl_0_1 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:11:18.951 00:11:18.951 --- 10.0.0.2 ping statistics --- 00:11:18.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.951 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:11:18.951 00:11:18.951 --- 10.0.0.1 ping statistics --- 00:11:18.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.951 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.951 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
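The sequence above (`ip netns add` through the two `ping -c 1` checks) moves the target-side NIC into a private network namespace so the target and initiator exercise a real TCP path on one host. A minimal dry-run sketch of the same wiring, using the interface names, addresses, and port taken from this log; the `run()` helper and `DRY_RUN` flag are illustrative additions, not part of SPDK's scripts, and the real commands need root plus the actual NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring above. Interface names (cvl_0_0,
# cvl_0_1), the 10.0.0.x addresses, and port 4420 come from this log;
# run()/DRY_RUN are illustrative helpers added here for safe inspection.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
CMDS=()
run() {
  CMDS+=("$*")                                 # record each command for inspection
  if [[ $DRY_RUN -eq 1 ]]; then echo "+ $*"; else "$@"; fi
}

NS=cvl_0_0_ns_spdk        # target-side namespace
TGT_IF=cvl_0_0            # moved into the namespace, gets 10.0.0.2
INI_IF=cvl_0_1            # stays in the default namespace, gets 10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator reachability
```

With `DRY_RUN=0` and root privileges this reproduces the setup the log performs before the target is started inside the namespace via `ip netns exec cvl_0_0_ns_spdk nvmf_tgt`.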
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1424981 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1424981 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1424981 ']' 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.952 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 [2024-07-26 11:19:13.675364] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:11:18.952 [2024-07-26 11:19:13.675409] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.952 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.952 [2024-07-26 11:19:13.743715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.952 [2024-07-26 11:19:13.821211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.952 [2024-07-26 11:19:13.821249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.952 [2024-07-26 11:19:13.821256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.952 [2024-07-26 11:19:13.821267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.952 [2024-07-26 11:19:13.821272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:18.952 [2024-07-26 11:19:13.821382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:18.952 [2024-07-26 11:19:13.821486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:18.952 [2024-07-26 11:19:13.821568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.952 [2024-07-26 11:19:13.821569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 [2024-07-26 11:19:14.514743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.952 11:19:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 Malloc0 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 [2024-07-26 11:19:14.566244] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:18.952 { 00:11:18.952 "params": { 00:11:18.952 "name": "Nvme$subsystem", 00:11:18.952 "trtype": "$TEST_TRANSPORT", 00:11:18.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.952 "adrfam": "ipv4", 00:11:18.952 "trsvcid": "$NVMF_PORT", 00:11:18.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.952 "hdgst": ${hdgst:-false}, 00:11:18.952 "ddgst": ${ddgst:-false} 00:11:18.952 }, 00:11:18.952 "method": "bdev_nvme_attach_controller" 00:11:18.952 } 00:11:18.952 EOF 00:11:18.952 )") 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:18.952 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:18.952 "params": { 00:11:18.952 "name": "Nvme1", 00:11:18.952 "trtype": "tcp", 00:11:18.952 "traddr": "10.0.0.2", 00:11:18.952 "adrfam": "ipv4", 00:11:18.952 "trsvcid": "4420", 00:11:18.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.952 "hdgst": false, 00:11:18.952 "ddgst": false 00:11:18.952 }, 00:11:18.952 "method": "bdev_nvme_attach_controller" 00:11:18.952 }' 00:11:19.210 [2024-07-26 11:19:14.614494] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:11:19.210 [2024-07-26 11:19:14.614535] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425014 ] 00:11:19.210 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.210 [2024-07-26 11:19:14.683257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.210 [2024-07-26 11:19:14.758051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.210 [2024-07-26 11:19:14.758155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.210 [2024-07-26 11:19:14.758156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.467 I/O targets: 00:11:19.467 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:19.467 00:11:19.467 00:11:19.467 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.467 http://cunit.sourceforge.net/ 00:11:19.467 00:11:19.467 00:11:19.467 Suite: bdevio tests on: Nvme1n1 00:11:19.467 Test: blockdev write read block ...passed 00:11:19.724 Test: blockdev write zeroes read block ...passed 00:11:19.724 Test: blockdev write zeroes read no split 
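The `gen_nvmf_target_json` output above is built by templating each subsystem's attach parameters through a heredoc, collecting the fragments in an array, and comma-joining them. A standalone sketch of that pattern with the values this run used; note the real helper additionally pipes the result through `jq .`, which is omitted here so the sketch has no external dependency:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen above: one JSON fragment
# per subsystem via a heredoc, joined with IFS=','. Values are from this log;
# the real helper also normalizes the result with `jq .` (omitted here).
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

old_IFS=$IFS
IFS=,
json="${config[*]}"    # comma-join the per-subsystem fragments
IFS=$old_IFS
printf '%s\n' "$json"
```

The joined JSON is what the log feeds to `bdevio --json /dev/fd/62`, so the bdevio app attaches `Nvme1` over NVMe/TCP without a config file on disk.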
...passed 00:11:19.724 Test: blockdev write zeroes read split ...passed 00:11:19.724 Test: blockdev write zeroes read split partial ...passed 00:11:19.724 Test: blockdev reset ...[2024-07-26 11:19:15.192871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:19.724 [2024-07-26 11:19:15.192935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b146d0 (9): Bad file descriptor 00:11:19.724 [2024-07-26 11:19:15.244342] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:19.724 passed 00:11:19.724 Test: blockdev write read 8 blocks ...passed 00:11:19.724 Test: blockdev write read size > 128k ...passed 00:11:19.724 Test: blockdev write read invalid size ...passed 00:11:19.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.724 Test: blockdev write read max offset ...passed 00:11:19.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.724 Test: blockdev writev readv 8 blocks ...passed 00:11:19.724 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.982 Test: blockdev writev readv block ...passed 00:11:19.982 Test: blockdev writev readv size > 128k ...passed 00:11:19.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.982 Test: blockdev comparev and writev ...[2024-07-26 11:19:15.417384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.982 [2024-07-26 11:19:15.417410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:19.982 [2024-07-26 11:19:15.417424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:19.982 [2024-07-26 11:19:15.417431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:19.982 [2024-07-26 11:19:15.417684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.982 [2024-07-26 11:19:15.417695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:19.982 [2024-07-26 11:19:15.417706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.982 [2024-07-26 11:19:15.417712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:19.982 [2024-07-26 11:19:15.417934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.982 [2024-07-26 11:19:15.417944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:19.982 [2024-07-26 11:19:15.417956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.982 [2024-07-26 11:19:15.417963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:19.982 [2024-07-26 11:19:15.418207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.982 [2024-07-26 11:19:15.418217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:19.982 [2024-07-26 11:19:15.418228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.982 [2024-07-26 11:19:15.418235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:19.982 passed 00:11:19.982 Test: blockdev nvme passthru rw ...passed 00:11:19.982 Test: blockdev nvme passthru vendor specific ...[2024-07-26 11:19:15.500045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.982 [2024-07-26 11:19:15.500061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:19.983 [2024-07-26 11:19:15.500172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.983 [2024-07-26 11:19:15.500185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:19.983 [2024-07-26 11:19:15.500289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.983 [2024-07-26 11:19:15.500298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:19.983 [2024-07-26 11:19:15.500399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.983 [2024-07-26 11:19:15.500409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:19.983 passed 00:11:19.983 Test: blockdev nvme admin passthru ...passed 00:11:19.983 Test: blockdev copy ...passed 00:11:19.983 00:11:19.983 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.983 suites 1 1 n/a 0 0 00:11:19.983 tests 23 23 23 0 0 00:11:19.983 asserts 152 152 152 0 n/a 00:11:19.983 00:11:19.983 Elapsed time = 
0.963 seconds 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.240 rmmod nvme_tcp 00:11:20.240 rmmod nvme_fabrics 00:11:20.240 rmmod nvme_keyring 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1424981 ']' 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1424981 00:11:20.240 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 1424981 ']' 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1424981 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1424981 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1424981' 00:11:20.241 killing process with pid 1424981 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1424981 00:11:20.241 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1424981 00:11:20.500 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.500 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.500 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.500 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.500 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.500 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.500 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.500 
11:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.034 00:11:23.034 real 0m10.345s 00:11:23.034 user 0m12.708s 00:11:23.034 sys 0m4.816s 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.034 ************************************ 00:11:23.034 END TEST nvmf_bdevio 00:11:23.034 ************************************ 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:23.034 00:11:23.034 real 4m39.061s 00:11:23.034 user 10m38.212s 00:11:23.034 sys 1m34.675s 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.034 ************************************ 00:11:23.034 END TEST nvmf_target_core 00:11:23.034 ************************************ 00:11:23.034 11:19:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:23.034 11:19:18 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.034 11:19:18 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.034 11:19:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.034 ************************************ 00:11:23.034 START TEST nvmf_target_extra 00:11:23.034 ************************************ 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:23.034 * Looking for test storage... 
00:11:23.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.034 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.035 11:19:18 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.035 ************************************ 00:11:23.035 START TEST nvmf_example 00:11:23.035 ************************************ 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:23.035 * Looking for test storage... 
00:11:23.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:23.035 11:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.035 11:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.035 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.606 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:29.606 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:29.606 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.606 11:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:29.606 Found net devices under 0000:86:00.0: cvl_0_0 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:29.606 Found net devices under 0000:86:00.1: cvl_0_1 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.606 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:11:29.607 00:11:29.607 --- 10.0.0.2 ping statistics --- 00:11:29.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.607 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:11:29.607 00:11:29.607 --- 10.0.0.1 ping statistics --- 00:11:29.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.607 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
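The interface and namespace plumbing that `nvmf/common.sh`'s `nvmf_tcp_init` performs above can be condensed into a standalone sketch. This is an illustrative reconstruction from the commands visible in the log, not the script itself: it needs root, and the `cvl_0_0`/`cvl_0_1` names are the specific NIC ports this CI host detected earlier, so substitute your own devices.

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup logged above (requires root).
# cvl_0_0/cvl_0_1 are the ports found under PCI device 0000:86:00.x in this log.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target side, moved into the namespace
INI_IF=cvl_0_1        # initiator side, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default discovery/IO port
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The point of the namespace split is that the target (`nvmf` example app) and the initiator (`spdk_nvme_perf`) can talk over a real TCP path on one machine while each sees only its own interface.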
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1428813 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1428813 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1428813 ']' 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.607 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.607 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.865 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.865 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:29.865 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:29.865 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.832 Initializing NVMe Controllers 00:11:39.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:39.832 Initialization complete. Launching workers. 00:11:39.832 ======================================================== 00:11:39.832 Latency(us) 00:11:39.832 Device Information : IOPS MiB/s Average min max 00:11:39.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18412.48 71.92 3475.55 474.26 15422.97 00:11:39.832 ======================================================== 00:11:39.832 Total : 18412.48 71.92 3475.55 474.26 15422.97 00:11:39.832 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.832 rmmod nvme_tcp 00:11:39.832 rmmod nvme_fabrics 00:11:39.832 rmmod nvme_keyring 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
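The per-device summary line that `spdk_nvme_perf` printed above has a fixed shape (`... from core N: IOPS MiB/s Average min max`), which makes it easy to post-process. The following is a hypothetical helper, not part of the SPDK scripts, using the literal line from this log:

```shell
#!/usr/bin/env bash
# Extract the numeric columns from one spdk_nvme_perf per-device summary line.
line='TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18412.48 71.92 3475.55 474.26 15422.97'

# Everything after the last ": " is "IOPS MiB/s Average min max" (latency in us)
read -r iops mibps avg_us min_us max_us <<<"${line##*: }"

echo "IOPS=$iops MiB/s=$mibps avg=${avg_us}us min=${min_us}us max=${max_us}us"
# prints: IOPS=18412.48 MiB/s=71.92 avg=3475.55us min=474.26us max=15422.97us
```

This kind of extraction is how a CI job would turn the free-form perf report into a metric it can trend or gate on.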
nvmf/common.sh@124 -- # set -e 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1428813 ']' 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1428813 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1428813 ']' 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1428813 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.832 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1428813 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1428813' 00:11:40.093 killing process with pid 1428813 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1428813 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1428813 00:11:40.093 nvmf threads initialize successfully 00:11:40.093 bdev subsystem init successfully 00:11:40.093 created a nvmf target service 00:11:40.093 create targets's poll groups done 00:11:40.093 all subsystems of target started 00:11:40.093 nvmf target is running 00:11:40.093 all subsystems of target stopped 00:11:40.093 destroy targets's poll groups done 00:11:40.093 destroyed the nvmf target 
service 00:11:40.093 bdev subsystem finish successfully 00:11:40.093 nvmf threads destroy successfully 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.093 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.634 00:11:42.634 real 0m19.437s 00:11:42.634 user 0m45.512s 00:11:42.634 sys 0m5.742s 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.634 ************************************ 00:11:42.634 END TEST nvmf_example 00:11:42.634 ************************************ 00:11:42.634 11:19:37 
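Stripped of the xtrace noise, the target-side configuration that the `nvmf_example` test drove through `rpc_cmd` above amounts to five RPCs. A sketch using `scripts/rpc.py` (the standard SPDK RPC client; the socket path and NQN/serial values are the ones appearing in this log):

```shell
# Target configuration performed by target/nvmf_example.sh@45..57 above.
# Assumes the nvmf target app is already running and listening on the socket.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                      # creates Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up on 10.0.0.2:4420, the initiator side connects from the root namespace, which is exactly what the `spdk_nvme_perf -r 'trtype:tcp ... trsvcid:4420 ...'` run above did.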
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.634 ************************************ 00:11:42.634 START TEST nvmf_filesystem 00:11:42.634 ************************************ 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.634 * Looking for test storage... 00:11:42.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:42.634 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:42.635 11:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:42.635 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:42.635 #define SPDK_CONFIG_H 00:11:42.635 #define SPDK_CONFIG_APPS 1 00:11:42.635 #define SPDK_CONFIG_ARCH native 00:11:42.635 #undef SPDK_CONFIG_ASAN 00:11:42.635 #undef SPDK_CONFIG_AVAHI 00:11:42.635 #undef SPDK_CONFIG_CET 00:11:42.635 #define SPDK_CONFIG_COVERAGE 1 00:11:42.635 #define SPDK_CONFIG_CROSS_PREFIX 00:11:42.635 #undef SPDK_CONFIG_CRYPTO 00:11:42.635 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:42.635 #undef SPDK_CONFIG_CUSTOMOCF 00:11:42.635 #undef SPDK_CONFIG_DAOS 00:11:42.635 #define SPDK_CONFIG_DAOS_DIR 00:11:42.635 #define SPDK_CONFIG_DEBUG 1 00:11:42.635 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:42.635 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:42.635 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:42.635 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:42.635 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:42.636 #undef SPDK_CONFIG_DPDK_UADK 00:11:42.636 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:42.636 #define SPDK_CONFIG_EXAMPLES 1 00:11:42.636 #undef SPDK_CONFIG_FC 00:11:42.636 #define SPDK_CONFIG_FC_PATH 00:11:42.636 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:42.636 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:42.636 
#undef SPDK_CONFIG_FUSE 00:11:42.636 #undef SPDK_CONFIG_FUZZER 00:11:42.636 #define SPDK_CONFIG_FUZZER_LIB 00:11:42.636 #undef SPDK_CONFIG_GOLANG 00:11:42.636 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:42.636 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:42.636 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:42.636 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:42.636 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:42.636 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:42.636 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:42.636 #define SPDK_CONFIG_IDXD 1 00:11:42.636 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:42.636 #undef SPDK_CONFIG_IPSEC_MB 00:11:42.636 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:42.636 #define SPDK_CONFIG_ISAL 1 00:11:42.636 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:42.636 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:42.636 #define SPDK_CONFIG_LIBDIR 00:11:42.636 #undef SPDK_CONFIG_LTO 00:11:42.636 #define SPDK_CONFIG_MAX_LCORES 128 00:11:42.636 #define SPDK_CONFIG_NVME_CUSE 1 00:11:42.636 #undef SPDK_CONFIG_OCF 00:11:42.636 #define SPDK_CONFIG_OCF_PATH 00:11:42.636 #define SPDK_CONFIG_OPENSSL_PATH 00:11:42.636 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:42.636 #define SPDK_CONFIG_PGO_DIR 00:11:42.636 #undef SPDK_CONFIG_PGO_USE 00:11:42.636 #define SPDK_CONFIG_PREFIX /usr/local 00:11:42.636 #undef SPDK_CONFIG_RAID5F 00:11:42.636 #undef SPDK_CONFIG_RBD 00:11:42.636 #define SPDK_CONFIG_RDMA 1 00:11:42.636 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:42.636 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:42.636 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:42.636 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:42.636 #define SPDK_CONFIG_SHARED 1 00:11:42.636 #undef SPDK_CONFIG_SMA 00:11:42.636 #define SPDK_CONFIG_TESTS 1 00:11:42.636 #undef SPDK_CONFIG_TSAN 00:11:42.636 #define SPDK_CONFIG_UBLK 1 00:11:42.636 #define SPDK_CONFIG_UBSAN 1 00:11:42.636 #undef SPDK_CONFIG_UNIT_TESTS 00:11:42.636 #undef SPDK_CONFIG_URING 00:11:42.636 #define SPDK_CONFIG_URING_PATH 00:11:42.636 #undef 
SPDK_CONFIG_URING_ZNS 00:11:42.636 #undef SPDK_CONFIG_USDT 00:11:42.636 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:42.636 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:42.636 #define SPDK_CONFIG_VFIO_USER 1 00:11:42.636 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:42.636 #define SPDK_CONFIG_VHOST 1 00:11:42.636 #define SPDK_CONFIG_VIRTIO 1 00:11:42.636 #undef SPDK_CONFIG_VTUNE 00:11:42.636 #define SPDK_CONFIG_VTUNE_DIR 00:11:42.636 #define SPDK_CONFIG_WERROR 1 00:11:42.636 #define SPDK_CONFIG_WPDK_DIR 00:11:42.636 #undef SPDK_CONFIG_XNVME 00:11:42.636 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.636 11:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:42.636 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:42.636 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:42.636 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:42.637 
11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:42.637 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:42.637 
11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:42.637 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:42.637 
11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.637 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1431216 ]] 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1431216 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.AHuSoE 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.AHuSoE/tests/target /tmp/spdk.AHuSoE 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:42.638 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953421824 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:42.639 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4331008000 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=190353608704 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=195974307840 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5620699136 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97977233408 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987153920 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:42.639 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=39171837952 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=39194861568 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23023616 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97986682880 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987153920 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=471040 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=19597426688 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=19597430784 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:42.639 * Looking for test storage... 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=190353608704 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=7835291648 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.639 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.640 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:42.640 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.640 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.210 11:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:49.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.210 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:49.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:49.211 Found net devices under 0000:86:00.0: cvl_0_0 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:49.211 Found net devices under 0000:86:00.1: cvl_0_1 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.211 11:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:11:49.211 00:11:49.211 --- 10.0.0.2 ping statistics --- 00:11:49.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.211 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:11:49.211 00:11:49.211 --- 10.0.0.1 ping statistics --- 00:11:49.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.211 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.211 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:49.211 11:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:49.211 ************************************ 00:11:49.211 START TEST nvmf_filesystem_no_in_capsule 00:11:49.211 ************************************ 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1434234 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1434234 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 1434234 ']' 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.211 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.211 [2024-07-26 11:19:44.103368] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:11:49.211 [2024-07-26 11:19:44.103410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.211 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.211 [2024-07-26 11:19:44.173787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.211 [2024-07-26 11:19:44.248146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.211 [2024-07-26 11:19:44.248187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:49.212 [2024-07-26 11:19:44.248194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.212 [2024-07-26 11:19:44.248200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.212 [2024-07-26 11:19:44.248204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.212 [2024-07-26 11:19:44.248264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.212 [2024-07-26 11:19:44.248374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.212 [2024-07-26 11:19:44.248479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.212 [2024-07-26 11:19:44.248480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.471 [2024-07-26 11:19:44.944966] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.471 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.471 Malloc1 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.471 [2024-07-26 11:19:45.094886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:49.471 11:19:45 
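The target bring-up above is driven entirely through RPCs (`rpc_cmd` in the trace resolves to SPDK's rpc.py client talking to /var/tmp/spdk.sock): create the TCP transport with in-capsule data disabled (`-c 0`, this being the no_in_capsule variant), create a malloc bdev, create the subsystem, attach the namespace, and add the listener. A sketch of that sequence as plain `rpc.py` invocations, again as a dry-run since it needs a running nvmf_tgt; all command names and arguments are copied from the log:

```shell
#!/bin/sh
# Sketch of the RPC sequence filesystem.sh issues for in_capsule=0.
# Dry-run: echoes the rpc.py command lines instead of invoking them.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$NQN" Malloc1
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The `bdev_get_bdevs` output that follows confirms the sizing: block_size 512 and num_blocks 1048576, i.e. the 536870912-byte (512 MiB) malloc_size the script computes.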
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:49.471 { 00:11:49.471 "name": "Malloc1", 00:11:49.471 "aliases": [ 00:11:49.471 "b3997a08-f488-4210-96ed-3c83e1cb8c60" 00:11:49.471 ], 00:11:49.471 "product_name": "Malloc disk", 00:11:49.471 "block_size": 512, 00:11:49.471 "num_blocks": 1048576, 00:11:49.471 "uuid": "b3997a08-f488-4210-96ed-3c83e1cb8c60", 00:11:49.471 "assigned_rate_limits": { 00:11:49.471 "rw_ios_per_sec": 0, 00:11:49.471 "rw_mbytes_per_sec": 0, 00:11:49.471 "r_mbytes_per_sec": 0, 00:11:49.471 "w_mbytes_per_sec": 0 00:11:49.471 }, 00:11:49.471 "claimed": true, 00:11:49.471 "claim_type": "exclusive_write", 00:11:49.471 "zoned": false, 00:11:49.471 "supported_io_types": { 00:11:49.471 "read": true, 00:11:49.471 "write": true, 00:11:49.471 "unmap": true, 00:11:49.471 "flush": true, 00:11:49.471 "reset": true, 00:11:49.471 "nvme_admin": false, 00:11:49.471 "nvme_io": false, 00:11:49.471 "nvme_io_md": false, 00:11:49.471 "write_zeroes": true, 00:11:49.471 "zcopy": true, 00:11:49.471 "get_zone_info": false, 00:11:49.471 "zone_management": false, 00:11:49.471 "zone_append": false, 00:11:49.471 "compare": false, 00:11:49.471 "compare_and_write": 
false, 00:11:49.471 "abort": true, 00:11:49.471 "seek_hole": false, 00:11:49.471 "seek_data": false, 00:11:49.471 "copy": true, 00:11:49.471 "nvme_iov_md": false 00:11:49.471 }, 00:11:49.471 "memory_domains": [ 00:11:49.471 { 00:11:49.471 "dma_device_id": "system", 00:11:49.471 "dma_device_type": 1 00:11:49.471 }, 00:11:49.471 { 00:11:49.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.471 "dma_device_type": 2 00:11:49.471 } 00:11:49.471 ], 00:11:49.471 "driver_specific": {} 00:11:49.471 } 00:11:49.471 ]' 00:11:49.471 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:49.730 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:49.730 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:49.730 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:49.730 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:49.730 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:49.730 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.730 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.663 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:50.663 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:50.663 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.663 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:50.663 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:53.243 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:53.243 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:53.243 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.243 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:53.244 11:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.244 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:54.614 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:54.614 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:54.614 11:19:49 
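On the initiator side, the trace above connects over TCP with nvme-cli, polls `lsblk` until a block device with the subsystem serial appears (waitforserial), cross-checks the device size against malloc_size, then lays down a single GPT partition for the filesystem subtests. A condensed dry-run sketch of those steps (device name nvme0n1 as discovered in the log; the host NQN/ID flags from the log are omitted here for brevity):

```shell
#!/bin/sh
# Sketch of the initiator-side steps from target/filesystem.sh@60-69.
# Dry-run: echoes each command; the real steps need the connected device.
run() { echo "$@"; }

run nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# waitforserial then polls until the namespace shows up, roughly:
#   lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
run parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
run partprobe
```

The subsequent subtests all operate on the resulting /dev/nvme0n1p1 partition rather than the raw namespace.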
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:54.614 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.614 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.614 ************************************ 00:11:54.614 START TEST filesystem_ext4 00:11:54.614 ************************************ 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:54.615 11:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:54.615 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:54.615 mke2fs 1.46.5 (30-Dec-2021) 00:11:54.615 Discarding device blocks: 0/522240 done 00:11:54.615 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:54.615 Filesystem UUID: 9a528296-cc31-42d9-aa39-d128cd0d68a3 00:11:54.615 Superblock backups stored on blocks: 00:11:54.615 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:54.615 00:11:54.615 Allocating group tables: 0/64 done 00:11:54.615 Writing inode tables: 0/64 done 00:11:55.985 Creating journal (8192 blocks): done 00:11:55.985 Writing superblocks and filesystem accounting information: 0/64 done 00:11:55.985 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.985 11:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1434234 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:55.985 00:11:55.985 real 0m1.661s 00:11:55.985 user 0m0.028s 00:11:55.985 sys 0m0.063s 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:55.985 ************************************ 00:11:55.985 END TEST filesystem_ext4 00:11:55.985 ************************************ 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:55.985 
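Each filesystem_* subtest (ext4 above, btrfs and xfs below) exercises the same loop from target/filesystem.sh: build the filesystem on the exported partition, mount it, do a small create/sync/delete cycle, unmount, and verify the target process survived (`kill -0 $nvmfpid`) and the devices are still visible in lsblk. A condensed dry-run sketch of that loop; the force-flag selection mirrors make_filesystem in autotest_common.sh as seen in the trace (`-F` for ext4, `-f` otherwise):

```shell
#!/bin/sh
# Sketch of the per-filesystem check loop in target/filesystem.sh.
# Dry-run: echoes each command; the real loop needs /dev/nvme0n1p1.
run() { echo "$@"; }

fs_check() {
    fstype=$1
    dev=/dev/nvme0n1p1
    case "$fstype" in ext4) force=-F ;; *) force=-f ;; esac
    run mkfs."$fstype" "$force" "$dev"
    run mount "$dev" /mnt/device
    run touch /mnt/device/aaa   # minimal write to prove the mount is live
    run sync
    run rm /mnt/device/aaa
    run sync
    run umount /mnt/device
}

for fs in ext4 btrfs xfs; do fs_check "$fs"; done
```

The sync/umount pair is what generates the round trips visible in the trace; the long `real 0m3.354s` on xfs below is dominated by mkfs and the discard pass, not by the tiny touch/rm I/O.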
11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.985 ************************************ 00:11:55.985 START TEST filesystem_btrfs 00:11:55.985 ************************************ 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:55.985 11:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:55.985 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:56.549 btrfs-progs v6.6.2 00:11:56.549 See https://btrfs.readthedocs.io for more information. 00:11:56.549 00:11:56.549 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:56.549 NOTE: several default settings have changed in version 5.15, please make sure 00:11:56.549 this does not affect your deployments: 00:11:56.549 - DUP for metadata (-m dup) 00:11:56.549 - enabled no-holes (-O no-holes) 00:11:56.549 - enabled free-space-tree (-R free-space-tree) 00:11:56.549 00:11:56.549 Label: (null) 00:11:56.549 UUID: 9ee6a476-b015-4f6a-af35-e4f9b1d935ab 00:11:56.549 Node size: 16384 00:11:56.549 Sector size: 4096 00:11:56.549 Filesystem size: 510.00MiB 00:11:56.549 Block group profiles: 00:11:56.550 Data: single 8.00MiB 00:11:56.550 Metadata: DUP 32.00MiB 00:11:56.550 System: DUP 8.00MiB 00:11:56.550 SSD detected: yes 00:11:56.550 Zoned device: no 00:11:56.550 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:56.550 Runtime features: free-space-tree 00:11:56.550 Checksum: crc32c 00:11:56.550 Number of devices: 1 00:11:56.550 Devices: 00:11:56.550 ID SIZE PATH 00:11:56.550 1 510.00MiB /dev/nvme0n1p1 00:11:56.550 00:11:56.550 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:56.550 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1434234 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.481 00:11:57.481 real 0m1.363s 00:11:57.481 user 0m0.026s 00:11:57.481 sys 0m0.124s 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:57.481 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.481 ************************************ 00:11:57.481 END TEST filesystem_btrfs 00:11:57.481 ************************************ 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.481 ************************************ 00:11:57.481 START TEST filesystem_xfs 00:11:57.481 ************************************ 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:57.481 11:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:57.481 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:57.481 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:57.481 = sectsz=512 attr=2, projid32bit=1 00:11:57.481 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:57.481 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:57.481 data = bsize=4096 blocks=130560, imaxpct=25 00:11:57.481 = sunit=0 swidth=0 blks 00:11:57.481 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:57.481 log =internal log bsize=4096 blocks=16384, version=2 00:11:57.481 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:57.482 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:58.853 Discarding blocks...Done. 
00:11:58.853 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:58.853 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1434234 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.751 11:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.751 00:12:00.751 real 0m3.354s 00:12:00.751 user 0m0.024s 00:12:00.751 sys 0m0.071s 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.751 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:00.751 ************************************ 00:12:00.751 END TEST filesystem_xfs 00:12:00.751 ************************************ 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1434234 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1434234 ']' 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1434234 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.010 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1434234 00:12:01.268 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.268 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.268 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1434234' 00:12:01.268 killing process with pid 1434234 00:12:01.268 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1434234 00:12:01.268 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1434234 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:01.527 00:12:01.527 real 0m12.980s 00:12:01.527 user 0m50.947s 00:12:01.527 sys 0m1.238s 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.527 ************************************ 00:12:01.527 END TEST nvmf_filesystem_no_in_capsule 00:12:01.527 ************************************ 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.527 11:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.527 ************************************ 00:12:01.527 START TEST nvmf_filesystem_in_capsule 00:12:01.527 ************************************ 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1436721 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1436721 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1436721 ']' 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.527 11:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.527 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.528 [2024-07-26 11:19:57.154922] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:12:01.528 [2024-07-26 11:19:57.154963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.528 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.786 [2024-07-26 11:19:57.225161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.786 [2024-07-26 11:19:57.296809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.786 [2024-07-26 11:19:57.296852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.786 [2024-07-26 11:19:57.296859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.786 [2024-07-26 11:19:57.296865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.786 [2024-07-26 11:19:57.296870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:01.786 [2024-07-26 11:19:57.296928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.786 [2024-07-26 11:19:57.297031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.786 [2024-07-26 11:19:57.297138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.786 [2024-07-26 11:19:57.297139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.352 [2024-07-26 11:19:57.990757] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.352 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.610 Malloc1 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.610 11:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.610 [2024-07-26 11:19:58.139055] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.610 11:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:02.610 { 00:12:02.610 "name": "Malloc1", 00:12:02.610 "aliases": [ 00:12:02.610 "6ca0d11f-a50e-4dd7-a94f-d78a1ea319a5" 00:12:02.610 ], 00:12:02.610 "product_name": "Malloc disk", 00:12:02.610 "block_size": 512, 00:12:02.610 "num_blocks": 1048576, 00:12:02.610 "uuid": "6ca0d11f-a50e-4dd7-a94f-d78a1ea319a5", 00:12:02.610 "assigned_rate_limits": { 00:12:02.610 "rw_ios_per_sec": 0, 00:12:02.610 "rw_mbytes_per_sec": 0, 00:12:02.610 "r_mbytes_per_sec": 0, 00:12:02.610 "w_mbytes_per_sec": 0 00:12:02.610 }, 00:12:02.610 "claimed": true, 00:12:02.610 "claim_type": "exclusive_write", 00:12:02.610 "zoned": false, 00:12:02.610 "supported_io_types": { 00:12:02.610 "read": true, 00:12:02.610 "write": true, 00:12:02.610 "unmap": true, 00:12:02.610 "flush": true, 00:12:02.610 "reset": true, 00:12:02.610 "nvme_admin": false, 00:12:02.610 "nvme_io": false, 00:12:02.610 "nvme_io_md": false, 00:12:02.610 "write_zeroes": true, 00:12:02.610 "zcopy": true, 00:12:02.610 "get_zone_info": false, 00:12:02.610 "zone_management": false, 00:12:02.610 "zone_append": false, 00:12:02.610 "compare": false, 00:12:02.610 "compare_and_write": false, 00:12:02.610 "abort": true, 00:12:02.610 "seek_hole": false, 00:12:02.610 "seek_data": false, 00:12:02.610 "copy": true, 00:12:02.610 "nvme_iov_md": false 00:12:02.610 }, 00:12:02.610 "memory_domains": [ 00:12:02.610 { 00:12:02.610 "dma_device_id": "system", 00:12:02.610 "dma_device_type": 1 00:12:02.610 }, 00:12:02.610 { 00:12:02.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.610 "dma_device_type": 2 00:12:02.610 } 00:12:02.610 ], 00:12:02.610 
"driver_specific": {} 00:12:02.610 } 00:12:02.610 ]' 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:02.610 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.983 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.983 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.983 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.983 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:03.983 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:05.881 11:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:05.881 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:06.138 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:06.703 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.637 ************************************ 00:12:07.637 START TEST filesystem_in_capsule_ext4 00:12:07.637 ************************************ 00:12:07.637 11:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:07.637 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:07.637 mke2fs 1.46.5 (30-Dec-2021) 00:12:07.637 Discarding device blocks: 
0/522240 done 00:12:07.637 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:07.637 Filesystem UUID: 778df8ff-3ebb-4488-9509-c1db112fae72 00:12:07.637 Superblock backups stored on blocks: 00:12:07.637 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:07.637 00:12:07.637 Allocating group tables: 0/64 done 00:12:07.637 Writing inode tables: 0/64 done 00:12:07.905 Creating journal (8192 blocks): done 00:12:08.474 Writing superblocks and filesystem accounting information: 0/64 done 00:12:08.474 00:12:08.474 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:08.474 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1436721 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:08.474 00:12:08.474 real 0m1.015s 00:12:08.474 user 0m0.030s 00:12:08.474 sys 0m0.059s 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.474 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:08.474 ************************************ 00:12:08.474 END TEST filesystem_in_capsule_ext4 00:12:08.474 ************************************ 00:12:08.732 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:08.732 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:08.732 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.733 ************************************ 00:12:08.733 START 
TEST filesystem_in_capsule_btrfs 00:12:08.733 ************************************ 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:08.733 btrfs-progs v6.6.2 00:12:08.733 See https://btrfs.readthedocs.io for more information. 00:12:08.733 00:12:08.733 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:08.733 NOTE: several default settings have changed in version 5.15, please make sure 00:12:08.733 this does not affect your deployments: 00:12:08.733 - DUP for metadata (-m dup) 00:12:08.733 - enabled no-holes (-O no-holes) 00:12:08.733 - enabled free-space-tree (-R free-space-tree) 00:12:08.733 00:12:08.733 Label: (null) 00:12:08.733 UUID: 427e5a99-54ed-4a08-9962-b5b3af847664 00:12:08.733 Node size: 16384 00:12:08.733 Sector size: 4096 00:12:08.733 Filesystem size: 510.00MiB 00:12:08.733 Block group profiles: 00:12:08.733 Data: single 8.00MiB 00:12:08.733 Metadata: DUP 32.00MiB 00:12:08.733 System: DUP 8.00MiB 00:12:08.733 SSD detected: yes 00:12:08.733 Zoned device: no 00:12:08.733 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:08.733 Runtime features: free-space-tree 00:12:08.733 Checksum: crc32c 00:12:08.733 Number of devices: 1 00:12:08.733 Devices: 00:12:08.733 ID SIZE PATH 00:12:08.733 1 510.00MiB /dev/nvme0n1p1 00:12:08.733 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:08.733 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:08.991 11:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1436721 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:08.991 00:12:08.991 real 0m0.430s 00:12:08.991 user 0m0.029s 00:12:08.991 sys 0m0.119s 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:08.991 ************************************ 00:12:08.991 END TEST 
filesystem_in_capsule_btrfs 00:12:08.991 ************************************ 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.991 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.249 ************************************ 00:12:09.249 START TEST filesystem_in_capsule_xfs 00:12:09.249 ************************************ 00:12:09.249 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:09.250 11:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:09.250 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:09.250 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:09.250 = sectsz=512 attr=2, projid32bit=1 00:12:09.250 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:09.250 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:09.250 data = bsize=4096 blocks=130560, imaxpct=25 00:12:09.250 = sunit=0 swidth=0 blks 00:12:09.250 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:09.250 log =internal log bsize=4096 blocks=16384, version=2 00:12:09.250 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:09.250 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:10.183 Discarding blocks...Done. 
00:12:10.184 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:10.184 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1436721 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.712 00:12:12.712 real 0m3.245s 00:12:12.712 user 0m0.025s 00:12:12.712 sys 0m0.071s 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:12.712 ************************************ 00:12:12.712 END TEST filesystem_in_capsule_xfs 00:12:12.712 ************************************ 00:12:12.712 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.712 11:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1436721 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1436721 ']' 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1436721 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.712 11:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1436721 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1436721' 00:12:12.712 killing process with pid 1436721 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1436721 00:12:12.712 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1436721 00:12:12.970 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:12.970 00:12:12.970 real 0m11.531s 00:12:12.970 user 0m45.220s 00:12:12.970 sys 0m1.211s 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.229 ************************************ 00:12:13.229 END TEST nvmf_filesystem_in_capsule 00:12:13.229 ************************************ 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.229 rmmod nvme_tcp 00:12:13.229 rmmod nvme_fabrics 00:12:13.229 rmmod nvme_keyring 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.229 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.230 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.230 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.230 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.230 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.135 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:15.135 00:12:15.135 real 
0m32.934s 00:12:15.135 user 1m38.014s 00:12:15.135 sys 0m7.024s 00:12:15.135 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.135 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:15.135 ************************************ 00:12:15.135 END TEST nvmf_filesystem 00:12:15.135 ************************************ 00:12:15.394 11:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:15.394 11:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:15.394 11:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.394 11:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.394 ************************************ 00:12:15.394 START TEST nvmf_target_discovery 00:12:15.394 ************************************ 00:12:15.394 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:15.394 * Looking for test storage... 
00:12:15.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:15.395 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.965 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.965 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.965 
11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.965 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.965 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.965 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:12:21.966 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:21.966 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.966 11:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:21.966 Found net devices under 0000:86:00.0: cvl_0_0 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.966 11:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:21.966 Found net devices under 0000:86:00.1: cvl_0_1 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:12:21.966 00:12:21.966 --- 10.0.0.2 ping statistics --- 00:12:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.966 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:21.966 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:12:21.967 00:12:21.967 --- 10.0.0.1 ping statistics --- 00:12:21.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.967 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:21.967 11:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1442274 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1442274 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1442274 ']' 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.967 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.967 [2024-07-26 11:20:16.882730] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:12:21.967 [2024-07-26 11:20:16.882775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.967 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.967 [2024-07-26 11:20:16.951934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.967 [2024-07-26 11:20:17.031081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.967 [2024-07-26 11:20:17.031115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.967 [2024-07-26 11:20:17.031125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.967 [2024-07-26 11:20:17.031131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.967 [2024-07-26 11:20:17.031136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.967 [2024-07-26 11:20:17.031179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.967 [2024-07-26 11:20:17.031288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.967 [2024-07-26 11:20:17.031394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.967 [2024-07-26 11:20:17.031396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 [2024-07-26 11:20:17.732955] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:22.227 11:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 Null1 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 [2024-07-26 11:20:17.778331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 Null2 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 
11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 Null3 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:22.227 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.228 Null4 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.228 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.487 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.487 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:22.487 11:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.487 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.487 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:22.488 00:12:22.488 Discovery Log Number of Records 6, Generation counter 6 00:12:22.488 =====Discovery Log Entry 0====== 00:12:22.488 trtype: tcp 00:12:22.488 adrfam: ipv4 00:12:22.488 subtype: current discovery subsystem 00:12:22.488 treq: not required 00:12:22.488 portid: 0 00:12:22.488 trsvcid: 4420 00:12:22.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.488 traddr: 10.0.0.2 00:12:22.488 eflags: explicit discovery connections, duplicate discovery information 00:12:22.488 sectype: none 00:12:22.488 =====Discovery Log Entry 1====== 00:12:22.488 trtype: tcp 00:12:22.488 adrfam: ipv4 00:12:22.488 subtype: nvme subsystem 00:12:22.488 treq: not required 00:12:22.488 portid: 0 00:12:22.488 trsvcid: 4420 00:12:22.488 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:22.488 traddr: 10.0.0.2 00:12:22.488 eflags: none 00:12:22.488 sectype: none 00:12:22.488 =====Discovery Log Entry 2====== 00:12:22.488 trtype: tcp 00:12:22.488 adrfam: ipv4 00:12:22.488 subtype: nvme subsystem 00:12:22.488 treq: not required 00:12:22.488 portid: 0 00:12:22.488 trsvcid: 4420 00:12:22.488 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:22.488 traddr: 10.0.0.2 00:12:22.488 eflags: none 00:12:22.488 sectype: none 00:12:22.488 =====Discovery Log Entry 3====== 00:12:22.488 trtype: tcp 00:12:22.488 adrfam: ipv4 00:12:22.488 subtype: nvme subsystem 00:12:22.488 treq: not required 00:12:22.488 portid: 
0 00:12:22.488 trsvcid: 4420 00:12:22.488 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:22.488 traddr: 10.0.0.2 00:12:22.488 eflags: none 00:12:22.488 sectype: none 00:12:22.488 =====Discovery Log Entry 4====== 00:12:22.488 trtype: tcp 00:12:22.488 adrfam: ipv4 00:12:22.488 subtype: nvme subsystem 00:12:22.488 treq: not required 00:12:22.488 portid: 0 00:12:22.488 trsvcid: 4420 00:12:22.488 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:22.488 traddr: 10.0.0.2 00:12:22.488 eflags: none 00:12:22.488 sectype: none 00:12:22.488 =====Discovery Log Entry 5====== 00:12:22.488 trtype: tcp 00:12:22.488 adrfam: ipv4 00:12:22.488 subtype: discovery subsystem referral 00:12:22.488 treq: not required 00:12:22.488 portid: 0 00:12:22.488 trsvcid: 4430 00:12:22.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.488 traddr: 10.0.0.2 00:12:22.488 eflags: none 00:12:22.488 sectype: none 00:12:22.488 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:22.488 Perform nvmf subsystem discovery via RPC 00:12:22.488 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:22.488 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 [ 00:12:22.488 { 00:12:22.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:22.488 "subtype": "Discovery", 00:12:22.488 "listen_addresses": [ 00:12:22.488 { 00:12:22.488 "trtype": "TCP", 00:12:22.488 "adrfam": "IPv4", 00:12:22.488 "traddr": "10.0.0.2", 00:12:22.488 "trsvcid": "4420" 00:12:22.488 } 00:12:22.488 ], 00:12:22.488 "allow_any_host": true, 00:12:22.488 "hosts": [] 00:12:22.488 }, 00:12:22.488 { 00:12:22.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.488 "subtype": "NVMe", 00:12:22.488 "listen_addresses": [ 
00:12:22.488 { 00:12:22.488 "trtype": "TCP", 00:12:22.488 "adrfam": "IPv4", 00:12:22.488 "traddr": "10.0.0.2", 00:12:22.488 "trsvcid": "4420" 00:12:22.488 } 00:12:22.488 ], 00:12:22.488 "allow_any_host": true, 00:12:22.488 "hosts": [], 00:12:22.488 "serial_number": "SPDK00000000000001", 00:12:22.488 "model_number": "SPDK bdev Controller", 00:12:22.488 "max_namespaces": 32, 00:12:22.488 "min_cntlid": 1, 00:12:22.488 "max_cntlid": 65519, 00:12:22.488 "namespaces": [ 00:12:22.488 { 00:12:22.488 "nsid": 1, 00:12:22.488 "bdev_name": "Null1", 00:12:22.488 "name": "Null1", 00:12:22.488 "nguid": "1F9386A0E2454EB8AC487CABD9B9EF14", 00:12:22.488 "uuid": "1f9386a0-e245-4eb8-ac48-7cabd9b9ef14" 00:12:22.488 } 00:12:22.488 ] 00:12:22.488 }, 00:12:22.488 { 00:12:22.488 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:22.488 "subtype": "NVMe", 00:12:22.488 "listen_addresses": [ 00:12:22.488 { 00:12:22.488 "trtype": "TCP", 00:12:22.488 "adrfam": "IPv4", 00:12:22.488 "traddr": "10.0.0.2", 00:12:22.488 "trsvcid": "4420" 00:12:22.488 } 00:12:22.488 ], 00:12:22.488 "allow_any_host": true, 00:12:22.488 "hosts": [], 00:12:22.488 "serial_number": "SPDK00000000000002", 00:12:22.488 "model_number": "SPDK bdev Controller", 00:12:22.488 "max_namespaces": 32, 00:12:22.488 "min_cntlid": 1, 00:12:22.488 "max_cntlid": 65519, 00:12:22.488 "namespaces": [ 00:12:22.488 { 00:12:22.488 "nsid": 1, 00:12:22.488 "bdev_name": "Null2", 00:12:22.488 "name": "Null2", 00:12:22.488 "nguid": "80778DD6AF714E38AA98A4E43E846B32", 00:12:22.488 "uuid": "80778dd6-af71-4e38-aa98-a4e43e846b32" 00:12:22.488 } 00:12:22.488 ] 00:12:22.488 }, 00:12:22.488 { 00:12:22.488 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:22.488 "subtype": "NVMe", 00:12:22.488 "listen_addresses": [ 00:12:22.488 { 00:12:22.488 "trtype": "TCP", 00:12:22.488 "adrfam": "IPv4", 00:12:22.488 "traddr": "10.0.0.2", 00:12:22.488 "trsvcid": "4420" 00:12:22.488 } 00:12:22.488 ], 00:12:22.488 "allow_any_host": true, 00:12:22.488 "hosts": [], 00:12:22.488 
"serial_number": "SPDK00000000000003", 00:12:22.488 "model_number": "SPDK bdev Controller", 00:12:22.488 "max_namespaces": 32, 00:12:22.488 "min_cntlid": 1, 00:12:22.488 "max_cntlid": 65519, 00:12:22.488 "namespaces": [ 00:12:22.488 { 00:12:22.488 "nsid": 1, 00:12:22.488 "bdev_name": "Null3", 00:12:22.488 "name": "Null3", 00:12:22.488 "nguid": "DD1B9B1AABF9492A95F0ECA4A858A328", 00:12:22.488 "uuid": "dd1b9b1a-abf9-492a-95f0-eca4a858a328" 00:12:22.488 } 00:12:22.488 ] 00:12:22.488 }, 00:12:22.488 { 00:12:22.488 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:22.488 "subtype": "NVMe", 00:12:22.488 "listen_addresses": [ 00:12:22.488 { 00:12:22.488 "trtype": "TCP", 00:12:22.488 "adrfam": "IPv4", 00:12:22.488 "traddr": "10.0.0.2", 00:12:22.488 "trsvcid": "4420" 00:12:22.488 } 00:12:22.488 ], 00:12:22.488 "allow_any_host": true, 00:12:22.488 "hosts": [], 00:12:22.488 "serial_number": "SPDK00000000000004", 00:12:22.488 "model_number": "SPDK bdev Controller", 00:12:22.488 "max_namespaces": 32, 00:12:22.488 "min_cntlid": 1, 00:12:22.488 "max_cntlid": 65519, 00:12:22.488 "namespaces": [ 00:12:22.488 { 00:12:22.488 "nsid": 1, 00:12:22.488 "bdev_name": "Null4", 00:12:22.488 "name": "Null4", 00:12:22.488 "nguid": "946D24A3A36D4C40A1BC824D54DFA58D", 00:12:22.488 "uuid": "946d24a3-a36d-4c40-a1bc-824d54dfa58d" 00:12:22.488 } 00:12:22.488 ] 00:12:22.488 } 00:12:22.488 ] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:22.488 
11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.488 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.488 rmmod nvme_tcp 00:12:22.747 rmmod nvme_fabrics 00:12:22.747 rmmod nvme_keyring 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1442274 ']' 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1442274 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1442274 ']' 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1442274 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1442274 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1442274' 00:12:22.747 killing process with pid 1442274 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1442274 00:12:22.747 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1442274 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.036 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.953 00:12:24.953 real 0m9.626s 00:12:24.953 user 0m7.315s 00:12:24.953 sys 0m4.794s 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.953 ************************************ 00:12:24.953 END TEST 
nvmf_target_discovery 00:12:24.953 ************************************ 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.953 ************************************ 00:12:24.953 START TEST nvmf_referrals 00:12:24.953 ************************************ 00:12:24.953 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:25.213 * Looking for test storage... 00:12:25.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.213 11:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.213 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:31.781 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:31.781 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:31.781 Found net devices under 0000:86:00.0: cvl_0_0 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:86:00.1: cvl_0_1' 00:12:31.781 Found net devices under 0000:86:00.1: cvl_0_1 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.781 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.782 11:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:12:31.782 00:12:31.782 --- 10.0.0.2 ping statistics --- 00:12:31.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.782 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:31.782 00:12:31.782 --- 10.0.0.1 ping statistics --- 00:12:31.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.782 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1445903 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1445903 00:12:31.782 
11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1445903 ']' 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.782 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.782 [2024-07-26 11:20:26.551835] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:12:31.782 [2024-07-26 11:20:26.551879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.782 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.782 [2024-07-26 11:20:26.628610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.782 [2024-07-26 11:20:26.705453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.782 [2024-07-26 11:20:26.705491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:31.782 [2024-07-26 11:20:26.705498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.782 [2024-07-26 11:20:26.705504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.782 [2024-07-26 11:20:26.705510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.782 [2024-07-26 11:20:26.705555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.782 [2024-07-26 11:20:26.705587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.782 [2024-07-26 11:20:26.705717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.782 [2024-07-26 11:20:26.705718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.782 [2024-07-26 11:20:27.405840] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.782 [2024-07-26 11:20:27.419150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.782 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:32.041 11:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.041 11:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.041 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.299 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:32.557 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.558 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.816 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:33.074 11:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.074 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.332 rmmod nvme_tcp 00:12:33.332 rmmod nvme_fabrics 00:12:33.332 rmmod nvme_keyring 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1445903 ']' 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1445903 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1445903 ']' 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1445903 00:12:33.332 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:33.591 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.591 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1445903 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1445903' 00:12:33.591 killing process with pid 1445903 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 1445903 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1445903 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.591 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.127 00:12:36.127 real 0m10.724s 00:12:36.127 user 0m12.571s 00:12:36.127 sys 0m4.982s 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.127 ************************************ 00:12:36.127 END TEST nvmf_referrals 00:12:36.127 ************************************ 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.127 ************************************ 00:12:36.127 START TEST nvmf_connect_disconnect 00:12:36.127 ************************************ 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:36.127 * Looking for test storage... 00:12:36.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.127 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.128 11:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.128 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.405 11:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.405 11:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:41.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:41.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.405 11:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:41.405 Found net devices under 0000:86:00.0: cvl_0_0 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.405 
11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:41.405 Found net devices under 0000:86:00.1: cvl_0_1 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.405 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.406 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:41.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:41.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:12:41.665 00:12:41.665 --- 10.0.0.2 ping statistics --- 00:12:41.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.665 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:41.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:12:41.665 00:12:41.665 --- 10.0.0.1 ping statistics --- 00:12:41.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.665 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.665 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1449956 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1449956 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1449956 ']' 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.924 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:41.924 [2024-07-26 11:20:37.380951] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:12:41.924 [2024-07-26 11:20:37.380992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.924 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.924 [2024-07-26 11:20:37.448490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.924 [2024-07-26 11:20:37.526867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.924 [2024-07-26 11:20:37.526902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.924 [2024-07-26 11:20:37.526909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.924 [2024-07-26 11:20:37.526915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.924 [2024-07-26 11:20:37.526920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:41.924 [2024-07-26 11:20:37.526976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.924 [2024-07-26 11:20:37.527084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.924 [2024-07-26 11:20:37.527189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.924 [2024-07-26 11:20:37.527190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 [2024-07-26 11:20:38.241959] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.857 11:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 [2024-07-26 11:20:38.293585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:42.857 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:46.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.347 rmmod nvme_tcp 00:12:59.347 rmmod nvme_fabrics 00:12:59.347 rmmod nvme_keyring 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1449956 ']' 00:12:59.347 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1449956 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1449956 ']' 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1449956 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1449956 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1449956' 00:12:59.348 killing process with pid 1449956 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1449956 00:12:59.348 11:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1449956 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.348 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.913 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:01.913 00:13:01.913 real 0m25.593s 00:13:01.913 user 1m10.670s 00:13:01.913 sys 0m5.526s 00:13:01.913 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.913 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.913 ************************************ 00:13:01.913 END TEST nvmf_connect_disconnect 00:13:01.913 ************************************ 00:13:01.913 11:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:01.913 11:20:56 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.913 11:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.913 11:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.913 ************************************ 00:13:01.913 START TEST nvmf_multitarget 00:13:01.913 ************************************ 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:01.913 * Looking for test storage... 00:13:01.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.913 11:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:01.913 
11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.913 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.188 11:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.188 11:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:07.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.188 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:07.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.189 11:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:07.189 Found net devices under 0000:86:00.0: cvl_0_0 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:07.189 Found net devices under 0000:86:00.1: cvl_0_1 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.189 11:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.189 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:07.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:13:07.448 00:13:07.448 --- 10.0.0.2 ping statistics --- 00:13:07.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.448 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:07.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:13:07.448 00:13:07.448 --- 10.0.0.1 ping statistics --- 00:13:07.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.448 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1456461 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # 
waitforlisten 1456461 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1456461 ']' 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.448 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:07.448 [2024-07-26 11:21:02.954985] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:13:07.448 [2024-07-26 11:21:02.955026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.448 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.448 [2024-07-26 11:21:03.023024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.448 [2024-07-26 11:21:03.096486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.448 [2024-07-26 11:21:03.096528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:07.448 [2024-07-26 11:21:03.096535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.448 [2024-07-26 11:21:03.096541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.448 [2024-07-26 11:21:03.096546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.448 [2024-07-26 11:21:03.096620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.448 [2024-07-26 11:21:03.096726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.448 [2024-07-26 11:21:03.096759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.448 [2024-07-26 11:21:03.096760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:08.379 11:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:08.379 "nvmf_tgt_1" 00:13:08.379 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:08.636 "nvmf_tgt_2" 00:13:08.636 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:08.636 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:08.636 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:08.636 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:08.636 true 00:13:08.636 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:08.893 true 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.893 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.893 rmmod nvme_tcp 00:13:08.893 rmmod nvme_fabrics 00:13:08.893 rmmod nvme_keyring 00:13:08.894 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.894 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:08.894 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:08.894 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1456461 ']' 00:13:08.894 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1456461 00:13:08.894 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1456461 ']' 00:13:08.894 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1456461 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1456461 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1456461' 00:13:09.152 killing process with pid 1456461 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1456461 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1456461 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.152 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.685 00:13:11.685 real 
0m9.832s 00:13:11.685 user 0m9.089s 00:13:11.685 sys 0m4.726s 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.685 ************************************ 00:13:11.685 END TEST nvmf_multitarget 00:13:11.685 ************************************ 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.685 ************************************ 00:13:11.685 START TEST nvmf_rpc 00:13:11.685 ************************************ 00:13:11.685 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:11.685 * Looking for test storage... 
00:13:11.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.685 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.686 
11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.686 11:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.686 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:16.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:16.962 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:16.962 Found net devices under 0000:86:00.0: cvl_0_0 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:16.962 Found net devices under 0000:86:00.1: cvl_0_1 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.962 11:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.962 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:13:17.294 00:13:17.294 --- 10.0.0.2 ping statistics --- 00:13:17.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.294 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:13:17.294 00:13:17.294 --- 10.0.0.1 ping statistics --- 00:13:17.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.294 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1460632 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1460632 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1460632 ']' 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.294 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.294 [2024-07-26 11:21:12.885919] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:13:17.294 [2024-07-26 11:21:12.885960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.294 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.553 [2024-07-26 11:21:12.953946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.553 [2024-07-26 11:21:13.032134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.553 [2024-07-26 11:21:13.032168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.553 [2024-07-26 11:21:13.032175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.553 [2024-07-26 11:21:13.032181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.553 [2024-07-26 11:21:13.032186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:17.553 [2024-07-26 11:21:13.032293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.553 [2024-07-26 11:21:13.032321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.553 [2024-07-26 11:21:13.032433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.553 [2024-07-26 11:21:13.032434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:18.117 "tick_rate": 2100000000, 00:13:18.117 "poll_groups": [ 00:13:18.117 { 00:13:18.117 "name": "nvmf_tgt_poll_group_000", 00:13:18.117 "admin_qpairs": 0, 00:13:18.117 "io_qpairs": 0, 00:13:18.117 "current_admin_qpairs": 0, 00:13:18.117 "current_io_qpairs": 0, 00:13:18.117 "pending_bdev_io": 0, 00:13:18.117 "completed_nvme_io": 0, 
00:13:18.117 "transports": [] 00:13:18.117 }, 00:13:18.117 { 00:13:18.117 "name": "nvmf_tgt_poll_group_001", 00:13:18.117 "admin_qpairs": 0, 00:13:18.117 "io_qpairs": 0, 00:13:18.117 "current_admin_qpairs": 0, 00:13:18.117 "current_io_qpairs": 0, 00:13:18.117 "pending_bdev_io": 0, 00:13:18.117 "completed_nvme_io": 0, 00:13:18.117 "transports": [] 00:13:18.117 }, 00:13:18.117 { 00:13:18.117 "name": "nvmf_tgt_poll_group_002", 00:13:18.117 "admin_qpairs": 0, 00:13:18.117 "io_qpairs": 0, 00:13:18.117 "current_admin_qpairs": 0, 00:13:18.117 "current_io_qpairs": 0, 00:13:18.117 "pending_bdev_io": 0, 00:13:18.117 "completed_nvme_io": 0, 00:13:18.117 "transports": [] 00:13:18.117 }, 00:13:18.117 { 00:13:18.117 "name": "nvmf_tgt_poll_group_003", 00:13:18.117 "admin_qpairs": 0, 00:13:18.117 "io_qpairs": 0, 00:13:18.117 "current_admin_qpairs": 0, 00:13:18.117 "current_io_qpairs": 0, 00:13:18.117 "pending_bdev_io": 0, 00:13:18.117 "completed_nvme_io": 0, 00:13:18.117 "transports": [] 00:13:18.117 } 00:13:18.117 ] 00:13:18.117 }' 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:18.117 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.374 [2024-07-26 11:21:13.838299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.374 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:18.374 "tick_rate": 2100000000, 00:13:18.374 "poll_groups": [ 00:13:18.374 { 00:13:18.374 "name": "nvmf_tgt_poll_group_000", 00:13:18.374 "admin_qpairs": 0, 00:13:18.374 "io_qpairs": 0, 00:13:18.374 "current_admin_qpairs": 0, 00:13:18.374 "current_io_qpairs": 0, 00:13:18.374 "pending_bdev_io": 0, 00:13:18.374 "completed_nvme_io": 0, 00:13:18.374 "transports": [ 00:13:18.374 { 00:13:18.374 "trtype": "TCP" 00:13:18.374 } 00:13:18.374 ] 00:13:18.374 }, 00:13:18.374 { 00:13:18.374 "name": "nvmf_tgt_poll_group_001", 00:13:18.374 "admin_qpairs": 0, 00:13:18.374 "io_qpairs": 0, 00:13:18.374 "current_admin_qpairs": 0, 00:13:18.374 "current_io_qpairs": 0, 00:13:18.374 "pending_bdev_io": 0, 00:13:18.375 "completed_nvme_io": 0, 00:13:18.375 "transports": [ 00:13:18.375 { 00:13:18.375 "trtype": "TCP" 00:13:18.375 } 00:13:18.375 ] 00:13:18.375 }, 00:13:18.375 { 00:13:18.375 "name": "nvmf_tgt_poll_group_002", 00:13:18.375 "admin_qpairs": 0, 00:13:18.375 "io_qpairs": 0, 00:13:18.375 "current_admin_qpairs": 0, 00:13:18.375 "current_io_qpairs": 0, 00:13:18.375 "pending_bdev_io": 0, 00:13:18.375 "completed_nvme_io": 0, 00:13:18.375 
"transports": [ 00:13:18.375 { 00:13:18.375 "trtype": "TCP" 00:13:18.375 } 00:13:18.375 ] 00:13:18.375 }, 00:13:18.375 { 00:13:18.375 "name": "nvmf_tgt_poll_group_003", 00:13:18.375 "admin_qpairs": 0, 00:13:18.375 "io_qpairs": 0, 00:13:18.375 "current_admin_qpairs": 0, 00:13:18.375 "current_io_qpairs": 0, 00:13:18.375 "pending_bdev_io": 0, 00:13:18.375 "completed_nvme_io": 0, 00:13:18.375 "transports": [ 00:13:18.375 { 00:13:18.375 "trtype": "TCP" 00:13:18.375 } 00:13:18.375 ] 00:13:18.375 } 00:13:18.375 ] 00:13:18.375 }' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:18.375 11:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.375 Malloc1 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.375 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.375 [2024-07-26 11:21:14.010162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:18.375 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:18.633 [2024-07-26 11:21:14.034545] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:18.633 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:18.633 could not add new controller: failed to write to nvme-fabrics device 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
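The log above shows the access-control behavior under test: the first `nvme connect` is rejected with `nvmf_qpair_access_allowed: *ERROR*: Subsystem ... does not allow host ...` because the subsystem was created with `-a` disabled via `nvmf_subsystem_allow_any_host -d`, and it only succeeds after `nvmf_subsystem_add_host` puts the host NQN on the allowlist. A minimal bash model of that admission check is sketched below; it runs standalone, and the function name `access_allowed` and the flag/list variables are illustrative stand-ins, not SPDK's actual implementation in `ctrlr.c`.

```shell
#!/usr/bin/env bash
# Toy model of the NVMe-oF subsystem host-admission check exercised in the
# log: a connect attempt is admitted only if the subsystem allows any host,
# or the host NQN appears on its allowlist.
allow_any_host=0      # state after: nvmf_subsystem_allow_any_host -d
allowed_hosts=""      # empty allowlist

access_allowed() {
    local host=$1
    (( allow_any_host )) && return 0
    # Space-delimited membership test against the allowlist.
    case " $allowed_hosts " in *" $host "*) return 0 ;; esac
    return 1
}

host="nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562"

access_allowed "$host" || echo "denied"            # mirrors the first failed connect
allowed_hosts="$host"                              # mirrors: nvmf_subsystem_add_host
access_allowed "$host" && echo "allowed after add_host"
allowed_hosts=""                                   # mirrors: nvmf_subsystem_remove_host
allow_any_host=1                                   # mirrors: nvmf_subsystem_allow_any_host -e
access_allowed "$host" && echo "allowed for any host"
```

The same three transitions (deny, allow via allowlist, allow via any-host) are what the test drives through `rpc.sh@58`, `@61`-`@62`, and `@72`-`@73` in the log.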
00:13:18.633 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.564 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.564 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.564 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.564 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:19.564 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:22.088 11:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.088 [2024-07-26 11:21:17.338952] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:22.088 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:22.088 could not add new controller: failed to write to nvme-fabrics device 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.088 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.021 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.021 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.021 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.021 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:23.021 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:24.918 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.918 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.918 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.918 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:24.918 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.918 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:24.918 11:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.176 11:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.176 [2024-07-26 11:21:20.749513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 11:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.547 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.547 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.547 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.547 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:26.547 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.440 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.441 [2024-07-26 11:21:24.029972] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.441 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.812 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.812 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:13:29.812 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.812 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:29.812 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.715 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.974 [2024-07-26 11:21:27.414869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.974 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.975 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.358 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.358 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:33.358 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.358 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:33.358 
11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.279 [2024-07-26 11:21:30.751105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.279 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.280 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.212 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.212 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:36.212 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.212 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:36.212 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:38.740 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.740 [2024-07-26 11:21:34.052672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.673 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.673 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:39.673 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.673 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:39.673 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.197 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 [2024-07-26 11:21:37.398923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 
11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 
11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 [2024-07-26 11:21:37.447028] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.197 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 [2024-07-26 11:21:37.499191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 [2024-07-26 11:21:37.547351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 [2024-07-26 11:21:37.595546] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.198 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:42.198 "tick_rate": 2100000000, 00:13:42.198 "poll_groups": [ 00:13:42.198 { 00:13:42.198 "name": "nvmf_tgt_poll_group_000", 00:13:42.198 "admin_qpairs": 2, 00:13:42.198 "io_qpairs": 168, 00:13:42.198 "current_admin_qpairs": 0, 00:13:42.198 "current_io_qpairs": 0, 00:13:42.198 "pending_bdev_io": 0, 00:13:42.198 "completed_nvme_io": 268, 00:13:42.198 "transports": [ 00:13:42.198 { 00:13:42.198 "trtype": "TCP" 00:13:42.198 } 00:13:42.198 ] 00:13:42.198 }, 00:13:42.198 { 00:13:42.198 "name": "nvmf_tgt_poll_group_001", 00:13:42.198 "admin_qpairs": 2, 00:13:42.198 "io_qpairs": 168, 00:13:42.198 "current_admin_qpairs": 0, 00:13:42.198 "current_io_qpairs": 0, 00:13:42.198 "pending_bdev_io": 0, 00:13:42.198 "completed_nvme_io": 169, 00:13:42.198 "transports": [ 00:13:42.198 { 00:13:42.198 "trtype": "TCP" 00:13:42.198 } 00:13:42.198 ] 00:13:42.198 }, 00:13:42.198 { 00:13:42.198 "name": "nvmf_tgt_poll_group_002", 00:13:42.198 "admin_qpairs": 1, 00:13:42.198 "io_qpairs": 168, 00:13:42.198 "current_admin_qpairs": 0, 00:13:42.198 "current_io_qpairs": 0, 00:13:42.198 "pending_bdev_io": 0, 00:13:42.198 "completed_nvme_io": 296, 00:13:42.198 "transports": [ 00:13:42.198 { 00:13:42.198 "trtype": "TCP" 00:13:42.198 } 00:13:42.198 ] 00:13:42.198 }, 00:13:42.198 { 00:13:42.198 "name": "nvmf_tgt_poll_group_003", 00:13:42.198 "admin_qpairs": 2, 00:13:42.198 "io_qpairs": 168, 00:13:42.198 "current_admin_qpairs": 0, 00:13:42.198 "current_io_qpairs": 0, 00:13:42.198 "pending_bdev_io": 0, 
00:13:42.198 "completed_nvme_io": 289, 00:13:42.198 "transports": [ 00:13:42.198 { 00:13:42.198 "trtype": "TCP" 00:13:42.198 } 00:13:42.198 ] 00:13:42.198 } 00:13:42.198 ] 00:13:42.198 }' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.199 rmmod nvme_tcp 00:13:42.199 rmmod nvme_fabrics 00:13:42.199 rmmod nvme_keyring 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1460632 ']' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1460632 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1460632 ']' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1460632 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.199 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460632 00:13:42.458 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.458 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.458 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460632' 00:13:42.458 killing process with pid 1460632 00:13:42.458 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1460632 00:13:42.458 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 1460632 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.458 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:44.992 00:13:44.992 real 0m33.213s 00:13:44.992 user 1m41.564s 00:13:44.992 sys 0m6.084s 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.992 ************************************ 00:13:44.992 END TEST nvmf_rpc 00:13:44.992 ************************************ 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:13:44.992 ************************************ 00:13:44.992 START TEST nvmf_invalid 00:13:44.992 ************************************ 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:44.992 * Looking for test storage... 00:13:44.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.992 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.993 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.264 11:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.264 
11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:50.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.264 11:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:50.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.264 
11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:50.264 Found net devices under 0000:86:00.0: cvl_0_0 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:50.264 Found net devices under 0000:86:00.1: cvl_0_1 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.264 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.524 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:13:50.524 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:13:50.524 00:13:50.524 --- 10.0.0.2 ping statistics --- 00:13:50.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.524 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:13:50.524 00:13:50.524 --- 10.0.0.1 ping statistics --- 00:13:50.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.524 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1468446 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1468446 00:13:50.524 11:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1468446 ']' 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.524 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 [2024-07-26 11:21:46.203733] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:13:50.783 [2024-07-26 11:21:46.203776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.783 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.783 [2024-07-26 11:21:46.262873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.783 [2024-07-26 11:21:46.357305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.783 [2024-07-26 11:21:46.357346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:50.783 [2024-07-26 11:21:46.357356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.783 [2024-07-26 11:21:46.357364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.783 [2024-07-26 11:21:46.357387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.783 [2024-07-26 11:21:46.357451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.783 [2024-07-26 11:21:46.357579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.783 [2024-07-26 11:21:46.357686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.783 [2024-07-26 11:21:46.357686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1741 00:13:51.716 [2024-07-26 11:21:47.215730] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:51.716 { 00:13:51.716 "nqn": "nqn.2016-06.io.spdk:cnode1741", 00:13:51.716 "tgt_name": "foobar", 00:13:51.716 "method": "nvmf_create_subsystem", 00:13:51.716 "req_id": 1 00:13:51.716 } 00:13:51.716 Got JSON-RPC error response 00:13:51.716 response: 00:13:51.716 { 00:13:51.716 "code": -32603, 00:13:51.716 "message": "Unable to find target foobar" 00:13:51.716 }' 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:51.716 { 00:13:51.716 "nqn": "nqn.2016-06.io.spdk:cnode1741", 00:13:51.716 "tgt_name": "foobar", 00:13:51.716 "method": "nvmf_create_subsystem", 00:13:51.716 "req_id": 1 00:13:51.716 } 00:13:51.716 Got JSON-RPC error response 00:13:51.716 response: 00:13:51.716 { 00:13:51.716 "code": -32603, 00:13:51.716 "message": "Unable to find target foobar" 00:13:51.716 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:51.716 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25023 00:13:51.974 [2024-07-26 11:21:47.400396] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25023: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:51.974 { 00:13:51.974 "nqn": "nqn.2016-06.io.spdk:cnode25023", 00:13:51.974 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:51.974 "method": "nvmf_create_subsystem", 00:13:51.974 "req_id": 1 00:13:51.974 } 00:13:51.974 Got JSON-RPC error response 00:13:51.974 response: 
00:13:51.974 { 00:13:51.974 "code": -32602, 00:13:51.974 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:51.974 }' 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:51.974 { 00:13:51.974 "nqn": "nqn.2016-06.io.spdk:cnode25023", 00:13:51.974 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:51.974 "method": "nvmf_create_subsystem", 00:13:51.974 "req_id": 1 00:13:51.974 } 00:13:51.974 Got JSON-RPC error response 00:13:51.974 response: 00:13:51.974 { 00:13:51.974 "code": -32602, 00:13:51.974 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:51.974 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28614 00:13:51.974 [2024-07-26 11:21:47.584978] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28614: invalid model number 'SPDK_Controller' 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:51.974 { 00:13:51.974 "nqn": "nqn.2016-06.io.spdk:cnode28614", 00:13:51.974 "model_number": "SPDK_Controller\u001f", 00:13:51.974 "method": "nvmf_create_subsystem", 00:13:51.974 "req_id": 1 00:13:51.974 } 00:13:51.974 Got JSON-RPC error response 00:13:51.974 response: 00:13:51.974 { 00:13:51.974 "code": -32602, 00:13:51.974 "message": "Invalid MN SPDK_Controller\u001f" 00:13:51.974 }' 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:51.974 { 00:13:51.974 "nqn": "nqn.2016-06.io.spdk:cnode28614", 00:13:51.974 "model_number": "SPDK_Controller\u001f", 00:13:51.974 "method": "nvmf_create_subsystem", 00:13:51.974 "req_id": 1 00:13:51.974 } 
00:13:51.974 Got JSON-RPC error response 00:13:51.974 response: 00:13:51.974 { 00:13:51.974 "code": -32602, 00:13:51.974 "message": "Invalid MN SPDK_Controller\u001f" 00:13:51.974 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:51.974 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.975 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:51.975 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:52.234 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:52.234 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:52.234 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:52.234 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.235 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'f2odeW{# o~U_)|=R8f)m' 00:13:52.235 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'f2odeW{# o~U_)|=R8f)m' nqn.2016-06.io.spdk:cnode15740 00:13:52.494 [2024-07-26 11:21:47.910069] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15740: invalid serial number 'f2odeW{# o~U_)|=R8f)m' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:52.494 { 00:13:52.494 "nqn": "nqn.2016-06.io.spdk:cnode15740", 00:13:52.494 "serial_number": "f2odeW{# o~U_)|=R8f)m", 00:13:52.494 "method": "nvmf_create_subsystem", 00:13:52.494 "req_id": 1 00:13:52.494 } 00:13:52.494 Got JSON-RPC error response 00:13:52.494 response: 00:13:52.494 { 00:13:52.494 "code": -32602, 00:13:52.494 "message": "Invalid SN f2odeW{# o~U_)|=R8f)m" 00:13:52.494 }' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:52.494 { 00:13:52.494 "nqn": "nqn.2016-06.io.spdk:cnode15740", 00:13:52.494 "serial_number": "f2odeW{# o~U_)|=R8f)m", 00:13:52.494 "method": "nvmf_create_subsystem", 00:13:52.494 "req_id": 1 00:13:52.494 } 00:13:52.494 Got JSON-RPC error response 
00:13:52.494 response: 00:13:52.494 { 00:13:52.494 "code": -32602, 00:13:52.494 "message": "Invalid SN f2odeW{# o~U_)|=R8f)m" 00:13:52.494 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 37 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:52.494 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='<' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:52.494 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x21' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 89 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=j 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.495 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x78' 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.753 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 56 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '2%e#"tq'\'':<]Z<+%_!nknY/jdS[O[EgFj6'\''hxoLO8]' 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '2%e#"tq'\'':<]Z<+%_!nknY/jdS[O[EgFj6'\''hxoLO8]' nqn.2016-06.io.spdk:cnode2538 00:13:52.754 [2024-07-26 11:21:48.363603] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2538: invalid model number '2%e#"tq':<]Z<+%_!nknY/jdS[O[EgFj6'hxoLO8]' 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:52.754 { 00:13:52.754 "nqn": "nqn.2016-06.io.spdk:cnode2538", 00:13:52.754 "model_number": "2%e#\"tq'\'':<]Z<+%_!nknY/jdS[O[EgFj6'\''hxoLO8]", 00:13:52.754 "method": "nvmf_create_subsystem", 
00:13:52.754 "req_id": 1 00:13:52.754 } 00:13:52.754 Got JSON-RPC error response 00:13:52.754 response: 00:13:52.754 { 00:13:52.754 "code": -32602, 00:13:52.754 "message": "Invalid MN 2%e#\"tq'\'':<]Z<+%_!nknY/jdS[O[EgFj6'\''hxoLO8]" 00:13:52.754 }' 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:52.754 { 00:13:52.754 "nqn": "nqn.2016-06.io.spdk:cnode2538", 00:13:52.754 "model_number": "2%e#\"tq':<]Z<+%_!nknY/jdS[O[EgFj6'hxoLO8]", 00:13:52.754 "method": "nvmf_create_subsystem", 00:13:52.754 "req_id": 1 00:13:52.754 } 00:13:52.754 Got JSON-RPC error response 00:13:52.754 response: 00:13:52.754 { 00:13:52.754 "code": -32602, 00:13:52.754 "message": "Invalid MN 2%e#\"tq':<]Z<+%_!nknY/jdS[O[EgFj6'hxoLO8]" 00:13:52.754 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:52.754 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:53.082 [2024-07-26 11:21:48.556327] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.082 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:53.340 [2024-07-26 
11:21:48.930858] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:53.340 { 00:13:53.340 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:53.340 "listen_address": { 00:13:53.340 "trtype": "tcp", 00:13:53.340 "traddr": "", 00:13:53.340 "trsvcid": "4421" 00:13:53.340 }, 00:13:53.340 "method": "nvmf_subsystem_remove_listener", 00:13:53.340 "req_id": 1 00:13:53.340 } 00:13:53.340 Got JSON-RPC error response 00:13:53.340 response: 00:13:53.340 { 00:13:53.340 "code": -32602, 00:13:53.340 "message": "Invalid parameters" 00:13:53.340 }' 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:53.340 { 00:13:53.340 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:53.340 "listen_address": { 00:13:53.340 "trtype": "tcp", 00:13:53.340 "traddr": "", 00:13:53.340 "trsvcid": "4421" 00:13:53.340 }, 00:13:53.340 "method": "nvmf_subsystem_remove_listener", 00:13:53.340 "req_id": 1 00:13:53.340 } 00:13:53.340 Got JSON-RPC error response 00:13:53.340 response: 00:13:53.340 { 00:13:53.340 "code": -32602, 00:13:53.340 "message": "Invalid parameters" 00:13:53.340 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:53.340 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16829 -i 0 00:13:53.598 [2024-07-26 11:21:49.115415] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16829: invalid cntlid range [0-65519] 00:13:53.598 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:53.598 { 00:13:53.598 "nqn": "nqn.2016-06.io.spdk:cnode16829", 00:13:53.598 "min_cntlid": 0, 00:13:53.598 "method": "nvmf_create_subsystem", 00:13:53.598 "req_id": 1 00:13:53.598 } 00:13:53.598 Got JSON-RPC 
error response 00:13:53.598 response: 00:13:53.598 { 00:13:53.598 "code": -32602, 00:13:53.598 "message": "Invalid cntlid range [0-65519]" 00:13:53.598 }' 00:13:53.598 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:53.598 { 00:13:53.599 "nqn": "nqn.2016-06.io.spdk:cnode16829", 00:13:53.599 "min_cntlid": 0, 00:13:53.599 "method": "nvmf_create_subsystem", 00:13:53.599 "req_id": 1 00:13:53.599 } 00:13:53.599 Got JSON-RPC error response 00:13:53.599 response: 00:13:53.599 { 00:13:53.599 "code": -32602, 00:13:53.599 "message": "Invalid cntlid range [0-65519]" 00:13:53.599 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:53.599 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11164 -i 65520 00:13:53.858 [2024-07-26 11:21:49.296029] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11164: invalid cntlid range [65520-65519] 00:13:53.858 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:53.858 { 00:13:53.858 "nqn": "nqn.2016-06.io.spdk:cnode11164", 00:13:53.858 "min_cntlid": 65520, 00:13:53.858 "method": "nvmf_create_subsystem", 00:13:53.858 "req_id": 1 00:13:53.858 } 00:13:53.858 Got JSON-RPC error response 00:13:53.858 response: 00:13:53.858 { 00:13:53.858 "code": -32602, 00:13:53.858 "message": "Invalid cntlid range [65520-65519]" 00:13:53.858 }' 00:13:53.858 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:53.858 { 00:13:53.858 "nqn": "nqn.2016-06.io.spdk:cnode11164", 00:13:53.858 "min_cntlid": 65520, 00:13:53.858 "method": "nvmf_create_subsystem", 00:13:53.858 "req_id": 1 00:13:53.858 } 00:13:53.858 Got JSON-RPC error response 00:13:53.858 response: 00:13:53.858 { 00:13:53.858 "code": -32602, 00:13:53.858 "message": "Invalid cntlid range 
[65520-65519]" 00:13:53.858 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:53.858 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23556 -I 0 00:13:53.858 [2024-07-26 11:21:49.484718] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23556: invalid cntlid range [1-0] 00:13:53.858 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:53.858 { 00:13:53.858 "nqn": "nqn.2016-06.io.spdk:cnode23556", 00:13:53.858 "max_cntlid": 0, 00:13:53.858 "method": "nvmf_create_subsystem", 00:13:53.858 "req_id": 1 00:13:53.858 } 00:13:53.858 Got JSON-RPC error response 00:13:53.858 response: 00:13:53.858 { 00:13:53.858 "code": -32602, 00:13:53.858 "message": "Invalid cntlid range [1-0]" 00:13:53.858 }' 00:13:53.858 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:53.858 { 00:13:53.858 "nqn": "nqn.2016-06.io.spdk:cnode23556", 00:13:53.858 "max_cntlid": 0, 00:13:53.858 "method": "nvmf_create_subsystem", 00:13:53.858 "req_id": 1 00:13:53.858 } 00:13:53.858 Got JSON-RPC error response 00:13:53.858 response: 00:13:53.858 { 00:13:53.858 "code": -32602, 00:13:53.858 "message": "Invalid cntlid range [1-0]" 00:13:53.858 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:53.858 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19939 -I 65520 00:13:54.116 [2024-07-26 11:21:49.673275] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19939: invalid cntlid range [1-65520] 00:13:54.116 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:54.116 { 00:13:54.116 "nqn": "nqn.2016-06.io.spdk:cnode19939", 
00:13:54.116 "max_cntlid": 65520, 00:13:54.116 "method": "nvmf_create_subsystem", 00:13:54.116 "req_id": 1 00:13:54.116 } 00:13:54.116 Got JSON-RPC error response 00:13:54.116 response: 00:13:54.116 { 00:13:54.116 "code": -32602, 00:13:54.116 "message": "Invalid cntlid range [1-65520]" 00:13:54.116 }' 00:13:54.116 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:54.116 { 00:13:54.116 "nqn": "nqn.2016-06.io.spdk:cnode19939", 00:13:54.116 "max_cntlid": 65520, 00:13:54.116 "method": "nvmf_create_subsystem", 00:13:54.116 "req_id": 1 00:13:54.116 } 00:13:54.116 Got JSON-RPC error response 00:13:54.116 response: 00:13:54.116 { 00:13:54.116 "code": -32602, 00:13:54.116 "message": "Invalid cntlid range [1-65520]" 00:13:54.116 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.116 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15492 -i 6 -I 5 00:13:54.374 [2024-07-26 11:21:49.869942] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15492: invalid cntlid range [6-5] 00:13:54.374 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:54.374 { 00:13:54.374 "nqn": "nqn.2016-06.io.spdk:cnode15492", 00:13:54.374 "min_cntlid": 6, 00:13:54.374 "max_cntlid": 5, 00:13:54.374 "method": "nvmf_create_subsystem", 00:13:54.374 "req_id": 1 00:13:54.374 } 00:13:54.374 Got JSON-RPC error response 00:13:54.374 response: 00:13:54.374 { 00:13:54.374 "code": -32602, 00:13:54.374 "message": "Invalid cntlid range [6-5]" 00:13:54.374 }' 00:13:54.374 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:54.374 { 00:13:54.374 "nqn": "nqn.2016-06.io.spdk:cnode15492", 00:13:54.374 "min_cntlid": 6, 00:13:54.374 "max_cntlid": 5, 00:13:54.374 "method": "nvmf_create_subsystem", 00:13:54.374 
"req_id": 1 00:13:54.374 } 00:13:54.374 Got JSON-RPC error response 00:13:54.374 response: 00:13:54.374 { 00:13:54.374 "code": -32602, 00:13:54.374 "message": "Invalid cntlid range [6-5]" 00:13:54.374 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:54.374 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:54.375 { 00:13:54.375 "name": "foobar", 00:13:54.375 "method": "nvmf_delete_target", 00:13:54.375 "req_id": 1 00:13:54.375 } 00:13:54.375 Got JSON-RPC error response 00:13:54.375 response: 00:13:54.375 { 00:13:54.375 "code": -32602, 00:13:54.375 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:54.375 }' 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:54.375 { 00:13:54.375 "name": "foobar", 00:13:54.375 "method": "nvmf_delete_target", 00:13:54.375 "req_id": 1 00:13:54.375 } 00:13:54.375 Got JSON-RPC error response 00:13:54.375 response: 00:13:54.375 { 00:13:54.375 "code": -32602, 00:13:54.375 "message": "The specified target doesn't exist, cannot delete it." 
00:13:54.375 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.375 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.375 rmmod nvme_tcp 00:13:54.633 rmmod nvme_fabrics 00:13:54.633 rmmod nvme_keyring 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1468446 ']' 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1468446 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1468446 ']' 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1468446 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.633 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468446 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468446' 00:13:54.634 killing process with pid 1468446 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1468446 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1468446 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.634 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.171 00:13:57.171 real 0m12.144s 00:13:57.171 user 0m19.676s 00:13:57.171 sys 0m5.273s 
00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:57.171 ************************************ 00:13:57.171 END TEST nvmf_invalid 00:13:57.171 ************************************ 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.171 ************************************ 00:13:57.171 START TEST nvmf_connect_stress 00:13:57.171 ************************************ 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:57.171 * Looking for test storage... 
00:13:57.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.171 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:02.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.447 11:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:14:02.447 Found 0000:86:00.1 (0x8086 - 0x159b)
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:14:02.447 Found net devices under 0000:86:00.0: cvl_0_0
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:14:02.447 Found net devices under 0000:86:00.1: cvl_0_1
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:02.447 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:02.448 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:02.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:02.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms
00:14:02.707
00:14:02.707 --- 10.0.0.2 ping statistics ---
00:14:02.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:02.707 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:02.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:02.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:14:02.707
00:14:02.707 --- 10.0.0.1 ping statistics ---
00:14:02.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:02.707 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1472732
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1472732
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1472732 ']'
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:02.707 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:02.965 [2024-07-26 11:21:58.388795] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:14:02.965 [2024-07-26 11:21:58.388842] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:02.965 EAL: No free 2048 kB hugepages reported on node 1
00:14:02.965 [2024-07-26 11:21:58.460847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:02.965 [2024-07-26 11:21:58.537936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:02.965 [2024-07-26 11:21:58.537972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:02.965 [2024-07-26 11:21:58.537979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:02.965 [2024-07-26 11:21:58.537985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:02.965 [2024-07-26 11:21:58.537990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:02.965 [2024-07-26 11:21:58.538104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:14:02.965 [2024-07-26 11:21:58.538209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:02.965 [2024-07-26 11:21:58.538211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:03.900 [2024-07-26 11:21:59.238121] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:03.900 [2024-07-26 11:21:59.266321] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:03.900 NULL1
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1472860
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:14:03.900 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 EAL: No free 2048 kB hugepages reported on node 1
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.901 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:04.159 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.159 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:04.159 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:04.159 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.159 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:04.416 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.416 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:04.416 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:04.416 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.416 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:04.673 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.673 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:04.673 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:04.673 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.673 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:05.309 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.309 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:05.309 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:05.309 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.309 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:05.567 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.567 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:05.567 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:05.567 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.567 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:05.824 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.824 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:05.824 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:05.824 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.824 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:06.082 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.082 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:06.082 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.082 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.082 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:06.340 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.340 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:06.340 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.340 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.340 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:06.903 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.903 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:06.903 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.903 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.903 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:07.160 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.160 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:07.160 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:07.160 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.160 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:07.418 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.418 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:07.418 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:07.418 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.418 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:07.676 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.676 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:07.676 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:07.676 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.676 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:07.936 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.936 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:07.936 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:07.936 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.936 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:08.500 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.500 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:08.500 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:08.500 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.501 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:08.758 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.758 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:08.758 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:08.758 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.758 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:09.016 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.016 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:09.016 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:09.016 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.016 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:09.273 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.273 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:09.273 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:09.273 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.273 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:09.839 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.839 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:09.839 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:09.839 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.839 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:10.097 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.097 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:10.097 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:10.097 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.097 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:10.355 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.355 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:10.355 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:10.355 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.355 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:10.613 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.613 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:10.613 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:10.613 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.613 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:10.871 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.871 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:10.871 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:10.871 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.871 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:11.436 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.436 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:11.436 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:11.436 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.437 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:11.694 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.694 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:11.694 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:11.694 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.694 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:11.952 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.952 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:11.952 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:11.952 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.952 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:12.210 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.210 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:12.210 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:12.210 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.210 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:12.776 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.776 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:12.776 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:12.776 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.776 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:13.033 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.033 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:13.033 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:13.033 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.033 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:13.291 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.291 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:13.291 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:13.291 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.291 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:13.548 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.548 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:13.548 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:13.548 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.548 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:13.806 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1472860
00:14:13.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1472860) - No such process
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1472860
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:13.806 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:13.806 rmmod nvme_tcp
00:14:14.064 rmmod nvme_fabrics
00:14:14.064 rmmod nvme_keyring
00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1472732 ']' 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1472732 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1472732 ']' 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1472732 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472732 00:14:14.064 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:14.065 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:14.065 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472732' 00:14:14.065 killing process with pid 1472732 00:14:14.065 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1472732 00:14:14.065 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1472732 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.323 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:16.229 00:14:16.229 real 0m19.370s 00:14:16.229 user 0m41.106s 00:14:16.229 sys 0m8.307s 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.229 ************************************ 00:14:16.229 END TEST nvmf_connect_stress 00:14:16.229 ************************************ 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.229 ************************************ 00:14:16.229 START TEST nvmf_fused_ordering 00:14:16.229 ************************************ 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:16.488 * Looking for test storage... 00:14:16.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.488 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.488 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.488 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.488 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.488 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:23.050 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.050 11:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:23.050 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.050 11:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:23.050 Found net devices under 0000:86:00.0: cvl_0_0 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:23.050 Found net devices under 0000:86:00.1: cvl_0_1 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.050 
11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.050 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.051 
11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:14:23.051 00:14:23.051 --- 10.0.0.2 ping statistics --- 00:14:23.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.051 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:14:23.051 00:14:23.051 --- 10.0.0.1 ping statistics --- 00:14:23.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.051 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1478008 00:14:23.051 11:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1478008 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1478008 ']' 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.051 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 [2024-07-26 11:22:17.803419] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:14:23.051 [2024-07-26 11:22:17.803461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.051 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.051 [2024-07-26 11:22:17.875198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.051 [2024-07-26 11:22:17.948562] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:23.051 [2024-07-26 11:22:17.948600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.051 [2024-07-26 11:22:17.948607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.051 [2024-07-26 11:22:17.948612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.051 [2024-07-26 11:22:17.948617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.051 [2024-07-26 11:22:17.948642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 [2024-07-26 11:22:18.651328] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 [2024-07-26 11:22:18.671505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 NULL1 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:23.051 11:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.051 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:23.309 [2024-07-26 11:22:18.725452] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:14:23.309 [2024-07-26 11:22:18.725488] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478253 ] 00:14:23.309 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.567 Attached to nqn.2016-06.io.spdk:cnode1 00:14:23.567 Namespace ID: 1 size: 1GB 00:14:23.567 fused_ordering(0) 00:14:23.567 fused_ordering(1) 00:14:23.567 fused_ordering(2) 00:14:23.567 fused_ordering(3) 00:14:23.567 fused_ordering(4) 00:14:23.567 fused_ordering(5) 00:14:23.567 fused_ordering(6) 00:14:23.567 fused_ordering(7) 00:14:23.567 fused_ordering(8) 00:14:23.567 fused_ordering(9) 00:14:23.567 fused_ordering(10) 00:14:23.567 fused_ordering(11) 00:14:23.567 fused_ordering(12) 00:14:23.567 fused_ordering(13) 00:14:23.567 fused_ordering(14) 00:14:23.567 fused_ordering(15) 00:14:23.567 fused_ordering(16) 00:14:23.567 fused_ordering(17) 00:14:23.567 fused_ordering(18) 00:14:23.567 fused_ordering(19) 00:14:23.567 fused_ordering(20) 00:14:23.567 fused_ordering(21) 00:14:23.567 fused_ordering(22) 00:14:23.567 fused_ordering(23) 00:14:23.567 fused_ordering(24) 00:14:23.567 fused_ordering(25) 00:14:23.567 fused_ordering(26) 00:14:23.567 fused_ordering(27) 00:14:23.567 fused_ordering(28) 00:14:23.567 fused_ordering(29) 00:14:23.567 fused_ordering(30) 00:14:23.567 fused_ordering(31) 00:14:23.567 fused_ordering(32) 00:14:23.567 fused_ordering(33) 00:14:23.567 fused_ordering(34) 00:14:23.567 fused_ordering(35) 00:14:23.567 fused_ordering(36) 00:14:23.567 fused_ordering(37) 00:14:23.567 fused_ordering(38) 00:14:23.567 fused_ordering(39) 00:14:23.567 fused_ordering(40) 00:14:23.567 fused_ordering(41) 00:14:23.567 fused_ordering(42) 00:14:23.567 fused_ordering(43) 00:14:23.567 fused_ordering(44) 00:14:23.567 fused_ordering(45) 00:14:23.567 fused_ordering(46) 00:14:23.567 fused_ordering(47) 00:14:23.567 
fused_ordering(48) 00:14:23.567 fused_ordering(49) 00:14:23.567 fused_ordering(50) 00:14:23.567 fused_ordering(51) 00:14:23.567 fused_ordering(52) 00:14:23.567 fused_ordering(53) 00:14:23.567 fused_ordering(54) 00:14:23.567 fused_ordering(55) 00:14:23.567 fused_ordering(56) 00:14:23.567 fused_ordering(57) 00:14:23.567 fused_ordering(58) 00:14:23.567 fused_ordering(59) 00:14:23.567 fused_ordering(60) 00:14:23.567 fused_ordering(61) 00:14:23.567 fused_ordering(62) 00:14:23.567 fused_ordering(63) 00:14:23.568 fused_ordering(64) 00:14:23.568 fused_ordering(65) 00:14:23.568 fused_ordering(66) 00:14:23.568 fused_ordering(67) 00:14:23.568 fused_ordering(68) 00:14:23.568 fused_ordering(69) 00:14:23.568 fused_ordering(70) 00:14:23.568 fused_ordering(71) 00:14:23.568 fused_ordering(72) 00:14:23.568 fused_ordering(73) 00:14:23.568 fused_ordering(74) 00:14:23.568 fused_ordering(75) 00:14:23.568 fused_ordering(76) 00:14:23.568 fused_ordering(77) 00:14:23.568 fused_ordering(78) 00:14:23.568 fused_ordering(79) 00:14:23.568 fused_ordering(80) 00:14:23.568 fused_ordering(81) 00:14:23.568 fused_ordering(82) 00:14:23.568 fused_ordering(83) 00:14:23.568 fused_ordering(84) 00:14:23.568 fused_ordering(85) 00:14:23.568 fused_ordering(86) 00:14:23.568 fused_ordering(87) 00:14:23.568 fused_ordering(88) 00:14:23.568 fused_ordering(89) 00:14:23.568 fused_ordering(90) 00:14:23.568 fused_ordering(91) 00:14:23.568 fused_ordering(92) 00:14:23.568 fused_ordering(93) 00:14:23.568 fused_ordering(94) 00:14:23.568 fused_ordering(95) 00:14:23.568 fused_ordering(96) 00:14:23.568 fused_ordering(97) 00:14:23.568 fused_ordering(98) 00:14:23.568 fused_ordering(99) 00:14:23.568 fused_ordering(100) 00:14:23.568 fused_ordering(101) 00:14:23.568 fused_ordering(102) 00:14:23.568 fused_ordering(103) 00:14:23.568 fused_ordering(104) 00:14:23.568 fused_ordering(105) 00:14:23.568 fused_ordering(106) 00:14:23.568 fused_ordering(107) 00:14:23.568 fused_ordering(108) 00:14:23.568 fused_ordering(109) 00:14:23.568 
fused_ordering(110) 00:14:23.568 fused_ordering(111) 00:14:23.568 fused_ordering(112) 00:14:23.568 fused_ordering(113) 00:14:23.568 fused_ordering(114) 00:14:23.568 fused_ordering(115) 00:14:23.568 fused_ordering(116) 00:14:23.568 fused_ordering(117) 00:14:23.568 fused_ordering(118) 00:14:23.568 fused_ordering(119) 00:14:23.568 fused_ordering(120) 00:14:23.568 fused_ordering(121) 00:14:23.568 fused_ordering(122) 00:14:23.568 fused_ordering(123) 00:14:23.568 fused_ordering(124) 00:14:23.568 fused_ordering(125) 00:14:23.568 fused_ordering(126) 00:14:23.568 fused_ordering(127) 00:14:23.568 fused_ordering(128) 00:14:23.568 fused_ordering(129) 00:14:23.568 fused_ordering(130) 00:14:23.568 fused_ordering(131) 00:14:23.568 fused_ordering(132) 00:14:23.568 fused_ordering(133) 00:14:23.568 fused_ordering(134) 00:14:23.568 fused_ordering(135) 00:14:23.568 fused_ordering(136) 00:14:23.568 fused_ordering(137) 00:14:23.568 fused_ordering(138) 00:14:23.568 fused_ordering(139) 00:14:23.568 fused_ordering(140) 00:14:23.568 fused_ordering(141) 00:14:23.568 fused_ordering(142) 00:14:23.568 fused_ordering(143) 00:14:23.568 fused_ordering(144) 00:14:23.568 fused_ordering(145) 00:14:23.568 fused_ordering(146) 00:14:23.568 fused_ordering(147) 00:14:23.568 fused_ordering(148) 00:14:23.568 fused_ordering(149) 00:14:23.568 fused_ordering(150) 00:14:23.568 fused_ordering(151) 00:14:23.568 fused_ordering(152) 00:14:23.568 fused_ordering(153) 00:14:23.568 fused_ordering(154) 00:14:23.568 fused_ordering(155) 00:14:23.568 fused_ordering(156) 00:14:23.568 fused_ordering(157) 00:14:23.568 fused_ordering(158) 00:14:23.568 fused_ordering(159) 00:14:23.568 fused_ordering(160) 00:14:23.568 fused_ordering(161) 00:14:23.568 fused_ordering(162) 00:14:23.568 fused_ordering(163) 00:14:23.568 fused_ordering(164) 00:14:23.568 fused_ordering(165) 00:14:23.568 fused_ordering(166) 00:14:23.568 fused_ordering(167) 00:14:23.568 fused_ordering(168) 00:14:23.568 fused_ordering(169) 00:14:23.568 fused_ordering(170) 
00:14:23.568 fused_ordering(171) 00:14:23.568 fused_ordering(172) 00:14:23.568 fused_ordering(173) 00:14:23.568 fused_ordering(174) 00:14:23.568 fused_ordering(175) 00:14:23.568 fused_ordering(176) 00:14:23.568 fused_ordering(177) 00:14:23.568 fused_ordering(178) 00:14:23.568 fused_ordering(179) 00:14:23.568 fused_ordering(180) 00:14:23.568 fused_ordering(181) 00:14:23.568 fused_ordering(182) 00:14:23.568 fused_ordering(183) 00:14:23.568 fused_ordering(184) 00:14:23.568 fused_ordering(185) 00:14:23.568 fused_ordering(186) 00:14:23.568 fused_ordering(187) 00:14:23.568 fused_ordering(188) 00:14:23.568 fused_ordering(189) 00:14:23.568 fused_ordering(190) 00:14:23.568 fused_ordering(191) 00:14:23.568 fused_ordering(192) 00:14:23.568 fused_ordering(193) 00:14:23.568 fused_ordering(194) 00:14:23.568 fused_ordering(195) 00:14:23.568 fused_ordering(196) 00:14:23.568 fused_ordering(197) 00:14:23.568 fused_ordering(198) 00:14:23.568 fused_ordering(199) 00:14:23.568 fused_ordering(200) 00:14:23.568 fused_ordering(201) 00:14:23.568 fused_ordering(202) 00:14:23.568 fused_ordering(203) 00:14:23.568 fused_ordering(204) 00:14:23.568 fused_ordering(205) 00:14:23.826 fused_ordering(206) 00:14:23.826 fused_ordering(207) 00:14:23.826 fused_ordering(208) 00:14:23.826 fused_ordering(209) 00:14:23.826 fused_ordering(210) 00:14:23.826 fused_ordering(211) 00:14:23.826 fused_ordering(212) 00:14:23.827 fused_ordering(213) 00:14:23.827 fused_ordering(214) 00:14:23.827 fused_ordering(215) 00:14:23.827 fused_ordering(216) 00:14:23.827 fused_ordering(217) 00:14:23.827 fused_ordering(218) 00:14:23.827 fused_ordering(219) 00:14:23.827 fused_ordering(220) 00:14:23.827 fused_ordering(221) 00:14:23.827 fused_ordering(222) 00:14:23.827 fused_ordering(223) 00:14:23.827 fused_ordering(224) 00:14:23.827 fused_ordering(225) 00:14:23.827 fused_ordering(226) 00:14:23.827 fused_ordering(227) 00:14:23.827 fused_ordering(228) 00:14:23.827 fused_ordering(229) 00:14:23.827 fused_ordering(230) 00:14:23.827 
fused_ordering(231) 00:14:23.827 fused_ordering(232) 00:14:23.827 fused_ordering(233) 00:14:23.827 fused_ordering(234) 00:14:23.827 fused_ordering(235) 00:14:23.827 fused_ordering(236) 00:14:23.827 fused_ordering(237) 00:14:23.827 fused_ordering(238) 00:14:23.827 fused_ordering(239) 00:14:23.827 fused_ordering(240) 00:14:23.827 fused_ordering(241) 00:14:23.827 fused_ordering(242) 00:14:23.827 fused_ordering(243) 00:14:23.827 fused_ordering(244) 00:14:23.827 fused_ordering(245) 00:14:23.827 fused_ordering(246) 00:14:23.827 fused_ordering(247) 00:14:23.827 fused_ordering(248) 00:14:23.827 fused_ordering(249) 00:14:23.827 fused_ordering(250) 00:14:23.827 fused_ordering(251) 00:14:23.827 fused_ordering(252) 00:14:23.827 fused_ordering(253) 00:14:23.827 fused_ordering(254) 00:14:23.827 fused_ordering(255) 00:14:23.827 fused_ordering(256) 00:14:23.827 fused_ordering(257) 00:14:23.827 fused_ordering(258) 00:14:23.827 fused_ordering(259) 00:14:23.827 fused_ordering(260) 00:14:23.827 fused_ordering(261) 00:14:23.827 fused_ordering(262) 00:14:23.827 fused_ordering(263) 00:14:23.827 fused_ordering(264) 00:14:23.827 fused_ordering(265) 00:14:23.827 fused_ordering(266) 00:14:23.827 fused_ordering(267) 00:14:23.827 fused_ordering(268) 00:14:23.827 fused_ordering(269) 00:14:23.827 fused_ordering(270) 00:14:23.827 fused_ordering(271) 00:14:23.827 fused_ordering(272) 00:14:23.827 fused_ordering(273) 00:14:23.827 fused_ordering(274) 00:14:23.827 fused_ordering(275) 00:14:23.827 fused_ordering(276) 00:14:23.827 fused_ordering(277) 00:14:23.827 fused_ordering(278) 00:14:23.827 fused_ordering(279) 00:14:23.827 fused_ordering(280) 00:14:23.827 fused_ordering(281) 00:14:23.827 fused_ordering(282) 00:14:23.827 fused_ordering(283) 00:14:23.827 fused_ordering(284) 00:14:23.827 fused_ordering(285) 00:14:23.827 fused_ordering(286) 00:14:23.827 fused_ordering(287) 00:14:23.827 fused_ordering(288) 00:14:23.827 fused_ordering(289) 00:14:23.827 fused_ordering(290) 00:14:23.827 fused_ordering(291) 
00:14:23.827 fused_ordering(292) 00:14:23.827 fused_ordering(293) 00:14:23.827 fused_ordering(294) 00:14:23.827 fused_ordering(295) 00:14:23.827 fused_ordering(296) 00:14:23.827 fused_ordering(297) 00:14:23.827 fused_ordering(298) 00:14:23.827 fused_ordering(299) 00:14:23.827 fused_ordering(300) 00:14:23.827 fused_ordering(301) 00:14:23.827 fused_ordering(302) 00:14:23.827 fused_ordering(303) 00:14:23.827 fused_ordering(304) 00:14:23.827 fused_ordering(305) 00:14:23.827 fused_ordering(306) 00:14:23.827 fused_ordering(307) 00:14:23.827 fused_ordering(308) 00:14:23.827 fused_ordering(309) 00:14:23.827 fused_ordering(310) 00:14:23.827 fused_ordering(311) 00:14:23.827 fused_ordering(312) 00:14:23.827 fused_ordering(313) 00:14:23.827 fused_ordering(314) 00:14:23.827 fused_ordering(315) 00:14:23.827 fused_ordering(316) 00:14:23.827 fused_ordering(317) 00:14:23.827 fused_ordering(318) 00:14:23.827 fused_ordering(319) 00:14:23.827 fused_ordering(320) 00:14:23.827 fused_ordering(321) 00:14:23.827 fused_ordering(322) 00:14:23.827 fused_ordering(323) 00:14:23.827 fused_ordering(324) 00:14:23.827 fused_ordering(325) 00:14:23.827 fused_ordering(326) 00:14:23.827 fused_ordering(327) 00:14:23.827 fused_ordering(328) 00:14:23.827 fused_ordering(329) 00:14:23.827 fused_ordering(330) 00:14:23.827 fused_ordering(331) 00:14:23.827 fused_ordering(332) 00:14:23.827 fused_ordering(333) 00:14:23.827 fused_ordering(334) 00:14:23.827 fused_ordering(335) 00:14:23.827 fused_ordering(336) 00:14:23.827 fused_ordering(337) 00:14:23.827 fused_ordering(338) 00:14:23.827 fused_ordering(339) 00:14:23.827 fused_ordering(340) 00:14:23.827 fused_ordering(341) 00:14:23.827 fused_ordering(342) 00:14:23.827 fused_ordering(343) 00:14:23.827 fused_ordering(344) 00:14:23.827 fused_ordering(345) 00:14:23.827 fused_ordering(346) 00:14:23.827 fused_ordering(347) 00:14:23.827 fused_ordering(348) 00:14:23.827 fused_ordering(349) 00:14:23.827 fused_ordering(350) 00:14:23.827 fused_ordering(351) 00:14:23.827 
fused_ordering(352) 00:14:23.827 fused_ordering(353) 00:14:23.827 fused_ordering(354) 00:14:23.827 fused_ordering(355) 00:14:23.827 fused_ordering(356) 00:14:23.827 fused_ordering(357) 00:14:23.827 fused_ordering(358) 00:14:23.827 fused_ordering(359) 00:14:23.827 fused_ordering(360) 00:14:23.827 fused_ordering(361) 00:14:23.827 fused_ordering(362) 00:14:23.827 fused_ordering(363) 00:14:23.827 fused_ordering(364) 00:14:23.827 fused_ordering(365) 00:14:23.827 fused_ordering(366) 00:14:23.827 fused_ordering(367) 00:14:23.827 fused_ordering(368) 00:14:23.827 fused_ordering(369) 00:14:23.827 fused_ordering(370) 00:14:23.827 fused_ordering(371) 00:14:23.827 fused_ordering(372) 00:14:23.827 fused_ordering(373) 00:14:23.827 fused_ordering(374) 00:14:23.827 fused_ordering(375) 00:14:23.827 fused_ordering(376) 00:14:23.827 fused_ordering(377) 00:14:23.827 fused_ordering(378) 00:14:23.827 fused_ordering(379) 00:14:23.827 fused_ordering(380) 00:14:23.827 fused_ordering(381) 00:14:23.827 fused_ordering(382) 00:14:23.827 fused_ordering(383) 00:14:23.827 fused_ordering(384) 00:14:23.827 fused_ordering(385) 00:14:23.827 fused_ordering(386) 00:14:23.827 fused_ordering(387) 00:14:23.827 fused_ordering(388) 00:14:23.827 fused_ordering(389) 00:14:23.827 fused_ordering(390) 00:14:23.827 fused_ordering(391) 00:14:23.827 fused_ordering(392) 00:14:23.827 fused_ordering(393) 00:14:23.827 fused_ordering(394) 00:14:23.827 fused_ordering(395) 00:14:23.827 fused_ordering(396) 00:14:23.827 fused_ordering(397) 00:14:23.827 fused_ordering(398) 00:14:23.827 fused_ordering(399) 00:14:23.827 fused_ordering(400) 00:14:23.827 fused_ordering(401) 00:14:23.827 fused_ordering(402) 00:14:23.827 fused_ordering(403) 00:14:23.827 fused_ordering(404) 00:14:23.827 fused_ordering(405) 00:14:23.827 fused_ordering(406) 00:14:23.827 fused_ordering(407) 00:14:23.827 fused_ordering(408) 00:14:23.827 fused_ordering(409) 00:14:23.827 fused_ordering(410) 00:14:24.086 fused_ordering(411) 00:14:24.086 fused_ordering(412) 
00:14:24.086 fused_ordering(413) 00:14:24.086 fused_ordering(414) 00:14:24.086 fused_ordering(415) 00:14:24.086 fused_ordering(416) 00:14:24.086 fused_ordering(417) 00:14:24.086 fused_ordering(418) 00:14:24.086 fused_ordering(419) 00:14:24.086 fused_ordering(420) 00:14:24.086 fused_ordering(421) 00:14:24.086 fused_ordering(422) 00:14:24.086 fused_ordering(423) 00:14:24.086 fused_ordering(424) 00:14:24.086 fused_ordering(425) 00:14:24.086 fused_ordering(426) 00:14:24.086 fused_ordering(427) 00:14:24.086 fused_ordering(428) 00:14:24.086 fused_ordering(429) 00:14:24.086 fused_ordering(430) 00:14:24.086 fused_ordering(431) 00:14:24.086 fused_ordering(432) 00:14:24.086 fused_ordering(433) 00:14:24.086 fused_ordering(434) 00:14:24.086 fused_ordering(435) 00:14:24.086 fused_ordering(436) 00:14:24.086 fused_ordering(437) 00:14:24.086 fused_ordering(438) 00:14:24.086 fused_ordering(439) 00:14:24.086 fused_ordering(440) 00:14:24.086 fused_ordering(441) 00:14:24.086 fused_ordering(442) 00:14:24.086 fused_ordering(443) 00:14:24.086 fused_ordering(444) 00:14:24.086 fused_ordering(445) 00:14:24.086 fused_ordering(446) 00:14:24.086 fused_ordering(447) 00:14:24.086 fused_ordering(448) 00:14:24.086 fused_ordering(449) 00:14:24.086 fused_ordering(450) 00:14:24.086 fused_ordering(451) 00:14:24.086 fused_ordering(452) 00:14:24.086 fused_ordering(453) 00:14:24.086 fused_ordering(454) 00:14:24.086 fused_ordering(455) 00:14:24.086 fused_ordering(456) 00:14:24.086 fused_ordering(457) 00:14:24.086 fused_ordering(458) 00:14:24.086 fused_ordering(459) 00:14:24.086 fused_ordering(460) 00:14:24.086 fused_ordering(461) 00:14:24.086 fused_ordering(462) 00:14:24.086 fused_ordering(463) 00:14:24.086 fused_ordering(464) 00:14:24.086 fused_ordering(465) 00:14:24.086 fused_ordering(466) 00:14:24.086 fused_ordering(467) 00:14:24.086 fused_ordering(468) 00:14:24.086 fused_ordering(469) 00:14:24.086 fused_ordering(470) 00:14:24.086 fused_ordering(471) 00:14:24.086 fused_ordering(472) 00:14:24.086 
fused_ordering(473) 00:14:24.086 fused_ordering(474) 00:14:24.086 fused_ordering(475) 00:14:24.086 fused_ordering(476) 00:14:24.086 fused_ordering(477) 00:14:24.086 fused_ordering(478) 00:14:24.086 fused_ordering(479) 00:14:24.086 fused_ordering(480) 00:14:24.086 fused_ordering(481) 00:14:24.086 fused_ordering(482) 00:14:24.086 fused_ordering(483) 00:14:24.086 fused_ordering(484) 00:14:24.086 fused_ordering(485) 00:14:24.086 fused_ordering(486) 00:14:24.086 fused_ordering(487) 00:14:24.086 fused_ordering(488) 00:14:24.086 fused_ordering(489) 00:14:24.086 fused_ordering(490) 00:14:24.086 fused_ordering(491) 00:14:24.086 fused_ordering(492) 00:14:24.086 fused_ordering(493) 00:14:24.086 fused_ordering(494) 00:14:24.086 fused_ordering(495) 00:14:24.086 fused_ordering(496) 00:14:24.086 fused_ordering(497) 00:14:24.086 fused_ordering(498) 00:14:24.086 fused_ordering(499) 00:14:24.086 fused_ordering(500) 00:14:24.086 fused_ordering(501) 00:14:24.086 fused_ordering(502) 00:14:24.086 fused_ordering(503) 00:14:24.086 fused_ordering(504) 00:14:24.086 fused_ordering(505) 00:14:24.086 fused_ordering(506) 00:14:24.086 fused_ordering(507) 00:14:24.086 fused_ordering(508) 00:14:24.086 fused_ordering(509) 00:14:24.086 fused_ordering(510) 00:14:24.086 fused_ordering(511) 00:14:24.086 fused_ordering(512) 00:14:24.086 fused_ordering(513) 00:14:24.086 fused_ordering(514) 00:14:24.086 fused_ordering(515) 00:14:24.086 fused_ordering(516) 00:14:24.086 fused_ordering(517) 00:14:24.086 fused_ordering(518) 00:14:24.086 fused_ordering(519) 00:14:24.086 fused_ordering(520) 00:14:24.086 fused_ordering(521) 00:14:24.086 fused_ordering(522) 00:14:24.086 fused_ordering(523) 00:14:24.086 fused_ordering(524) 00:14:24.086 fused_ordering(525) 00:14:24.086 fused_ordering(526) 00:14:24.086 fused_ordering(527) 00:14:24.086 fused_ordering(528) 00:14:24.086 fused_ordering(529) 00:14:24.086 fused_ordering(530) 00:14:24.086 fused_ordering(531) 00:14:24.086 fused_ordering(532) 00:14:24.086 fused_ordering(533) 
00:14:24.086 fused_ordering(534) 00:14:24.086 fused_ordering(535) 00:14:24.086 fused_ordering(536) 00:14:24.086 fused_ordering(537) 00:14:24.086 fused_ordering(538) 00:14:24.086 fused_ordering(539) 00:14:24.086 fused_ordering(540) 00:14:24.086 fused_ordering(541) 00:14:24.086 fused_ordering(542) 00:14:24.086 fused_ordering(543) 00:14:24.086 fused_ordering(544) 00:14:24.086 fused_ordering(545) 00:14:24.086 fused_ordering(546) 00:14:24.086 fused_ordering(547) 00:14:24.086 fused_ordering(548) 00:14:24.086 fused_ordering(549) 00:14:24.086 fused_ordering(550) 00:14:24.086 fused_ordering(551) 00:14:24.086 fused_ordering(552) 00:14:24.086 fused_ordering(553) 00:14:24.086 fused_ordering(554) 00:14:24.086 fused_ordering(555) 00:14:24.086 fused_ordering(556) 00:14:24.086 fused_ordering(557) 00:14:24.086 fused_ordering(558) 00:14:24.086 fused_ordering(559) 00:14:24.086 fused_ordering(560) 00:14:24.086 fused_ordering(561) 00:14:24.086 fused_ordering(562) 00:14:24.086 fused_ordering(563) 00:14:24.086 fused_ordering(564) 00:14:24.086 fused_ordering(565) 00:14:24.086 fused_ordering(566) 00:14:24.086 fused_ordering(567) 00:14:24.086 fused_ordering(568) 00:14:24.086 fused_ordering(569) 00:14:24.086 fused_ordering(570) 00:14:24.086 fused_ordering(571) 00:14:24.086 fused_ordering(572) 00:14:24.086 fused_ordering(573) 00:14:24.086 fused_ordering(574) 00:14:24.086 fused_ordering(575) 00:14:24.086 fused_ordering(576) 00:14:24.086 fused_ordering(577) 00:14:24.086 fused_ordering(578) 00:14:24.086 fused_ordering(579) 00:14:24.086 fused_ordering(580) 00:14:24.086 fused_ordering(581) 00:14:24.086 fused_ordering(582) 00:14:24.086 fused_ordering(583) 00:14:24.086 fused_ordering(584) 00:14:24.086 fused_ordering(585) 00:14:24.086 fused_ordering(586) 00:14:24.086 fused_ordering(587) 00:14:24.086 fused_ordering(588) 00:14:24.086 fused_ordering(589) 00:14:24.086 fused_ordering(590) 00:14:24.086 fused_ordering(591) 00:14:24.086 fused_ordering(592) 00:14:24.086 fused_ordering(593) 00:14:24.086 
fused_ordering(594) 00:14:24.086 fused_ordering(595) 00:14:24.086 fused_ordering(596) 00:14:24.086 fused_ordering(597) 00:14:24.086 fused_ordering(598) 00:14:24.086 fused_ordering(599) 00:14:24.086 fused_ordering(600) 00:14:24.086 fused_ordering(601) 00:14:24.086 fused_ordering(602) 00:14:24.086 fused_ordering(603) 00:14:24.086 fused_ordering(604) 00:14:24.086 fused_ordering(605) 00:14:24.086 fused_ordering(606) 00:14:24.086 fused_ordering(607) 00:14:24.086 fused_ordering(608) 00:14:24.086 fused_ordering(609) 00:14:24.086 fused_ordering(610) 00:14:24.086 fused_ordering(611) 00:14:24.087 fused_ordering(612) 00:14:24.087 fused_ordering(613) 00:14:24.087 fused_ordering(614) 00:14:24.087 fused_ordering(615) 00:14:24.653 fused_ordering(616) 00:14:24.653 fused_ordering(617) 00:14:24.653 fused_ordering(618) 00:14:24.653 fused_ordering(619) 00:14:24.653 fused_ordering(620) 00:14:24.653 fused_ordering(621) 00:14:24.653 fused_ordering(622) 00:14:24.653 fused_ordering(623) 00:14:24.653 fused_ordering(624) 00:14:24.653 fused_ordering(625) 00:14:24.653 fused_ordering(626) 00:14:24.653 fused_ordering(627) 00:14:24.653 fused_ordering(628) 00:14:24.653 fused_ordering(629) 00:14:24.653 fused_ordering(630) 00:14:24.653 fused_ordering(631) 00:14:24.653 fused_ordering(632) 00:14:24.653 fused_ordering(633) 00:14:24.653 fused_ordering(634) 00:14:24.653 fused_ordering(635) 00:14:24.653 fused_ordering(636) 00:14:24.653 fused_ordering(637) 00:14:24.653 fused_ordering(638) 00:14:24.653 fused_ordering(639) 00:14:24.653 fused_ordering(640) 00:14:24.653 fused_ordering(641) 00:14:24.653 fused_ordering(642) 00:14:24.653 fused_ordering(643) 00:14:24.653 fused_ordering(644) 00:14:24.653 fused_ordering(645) 00:14:24.653 fused_ordering(646) 00:14:24.653 fused_ordering(647) 00:14:24.653 fused_ordering(648) 00:14:24.653 fused_ordering(649) 00:14:24.653 fused_ordering(650) 00:14:24.653 fused_ordering(651) 00:14:24.653 fused_ordering(652) 00:14:24.653 fused_ordering(653) 00:14:24.653 fused_ordering(654) 
00:14:24.653 fused_ordering(655) 00:14:24.653 fused_ordering(656) 00:14:24.653 fused_ordering(657) 00:14:24.653 fused_ordering(658) 00:14:24.653 fused_ordering(659) 00:14:24.653 fused_ordering(660) 00:14:24.653 fused_ordering(661) 00:14:24.653 fused_ordering(662) 00:14:24.653 fused_ordering(663) 00:14:24.653 fused_ordering(664) 00:14:24.653 fused_ordering(665) 00:14:24.653 fused_ordering(666) 00:14:24.653 fused_ordering(667) 00:14:24.653 fused_ordering(668) 00:14:24.653 fused_ordering(669) 00:14:24.653 fused_ordering(670) 00:14:24.653 fused_ordering(671) 00:14:24.653 fused_ordering(672) 00:14:24.653 fused_ordering(673) 00:14:24.653 fused_ordering(674) 00:14:24.653 fused_ordering(675) 00:14:24.653 fused_ordering(676) 00:14:24.653 fused_ordering(677) 00:14:24.653 fused_ordering(678) 00:14:24.653 fused_ordering(679) 00:14:24.653 fused_ordering(680) 00:14:24.653 fused_ordering(681) 00:14:24.653 fused_ordering(682) 00:14:24.653 fused_ordering(683) 00:14:24.653 fused_ordering(684) 00:14:24.653 fused_ordering(685) 00:14:24.653 fused_ordering(686) 00:14:24.653 fused_ordering(687) 00:14:24.653 fused_ordering(688) 00:14:24.653 fused_ordering(689) 00:14:24.653 fused_ordering(690) 00:14:24.653 fused_ordering(691) 00:14:24.653 fused_ordering(692) 00:14:24.653 fused_ordering(693) 00:14:24.653 fused_ordering(694) 00:14:24.653 fused_ordering(695) 00:14:24.653 fused_ordering(696) 00:14:24.653 fused_ordering(697) 00:14:24.653 fused_ordering(698) 00:14:24.653 fused_ordering(699) 00:14:24.653 fused_ordering(700) 00:14:24.653 fused_ordering(701) 00:14:24.653 fused_ordering(702) 00:14:24.653 fused_ordering(703) 00:14:24.653 fused_ordering(704) 00:14:24.653 fused_ordering(705) 00:14:24.653 fused_ordering(706) 00:14:24.653 fused_ordering(707) 00:14:24.653 fused_ordering(708) 00:14:24.653 fused_ordering(709) 00:14:24.653 fused_ordering(710) 00:14:24.653 fused_ordering(711) 00:14:24.653 fused_ordering(712) 00:14:24.653 fused_ordering(713) 00:14:24.653 fused_ordering(714) 00:14:24.653 
fused_ordering(715) 00:14:24.653 [... fused_ordering(716) through fused_ordering(1016) repeated, timestamps 00:14:24.653-00:14:24.913, collapsed ...] 
fused_ordering(1017) 00:14:24.913 fused_ordering(1018) 00:14:24.913 fused_ordering(1019) 00:14:24.913 fused_ordering(1020) 00:14:24.913 fused_ordering(1021) 00:14:24.913 fused_ordering(1022) 00:14:24.913 fused_ordering(1023) 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.913 rmmod nvme_tcp 00:14:24.913 rmmod nvme_fabrics 00:14:24.913 rmmod nvme_keyring 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1478008 ']' 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1478008 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1478008 ']' 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 1478008 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:24.913 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1478008 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1478008' 00:14:25.171 killing process with pid 1478008 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1478008 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1478008 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:14:25.171 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.707 00:14:27.707 real 0m10.972s 00:14:27.707 user 0m5.538s 00:14:27.707 sys 0m5.681s 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.707 ************************************ 00:14:27.707 END TEST nvmf_fused_ordering 00:14:27.707 ************************************ 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:27.707 ************************************ 00:14:27.707 START TEST nvmf_ns_masking 00:14:27.707 ************************************ 00:14:27.707 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:27.707 * Looking for test storage... 
00:14:27.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:27.707 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.708 
11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9506e596-952d-4003-813f-8589281435ba 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6b56ed1a-d4cd-4542-ad45-17570a04934c 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a8b1cb3c-6403-4a8d-b197-faea956242eb 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.708 11:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.708 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:32.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.988 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:32.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:32.989 Found net devices under 0000:86:00.0: cvl_0_0 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:32.989 Found net devices under 0000:86:00.1: cvl_0_1 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.989 11:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.989 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.247 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.247 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.247 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:33.247 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.247 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.247 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.247 11:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:33.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:14:33.247 00:14:33.248 --- 10.0.0.2 ping statistics --- 00:14:33.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.248 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:14:33.248 00:14:33.248 --- 10.0.0.1 ping statistics --- 00:14:33.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.248 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1482003 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1482003 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1482003 ']' 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.248 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:33.248 [2024-07-26 11:22:28.877671] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:14:33.248 [2024-07-26 11:22:28.877714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.248 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.506 [2024-07-26 11:22:28.947604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.506 [2024-07-26 11:22:29.023212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.507 [2024-07-26 11:22:29.023250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.507 [2024-07-26 11:22:29.023256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.507 [2024-07-26 11:22:29.023262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.507 [2024-07-26 11:22:29.023268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.507 [2024-07-26 11:22:29.023292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.098 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.098 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:34.098 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.098 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:34.098 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:34.098 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.098 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:34.408 [2024-07-26 11:22:29.866983] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.408 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:34.408 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:34.408 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:34.673 Malloc1 00:14:34.673 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:34.673 Malloc2 00:14:34.673 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:34.931 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:35.189 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.189 [2024-07-26 11:22:30.781937] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.189 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:35.189 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8b1cb3c-6403-4a8d-b197-faea956242eb -a 10.0.0.2 -s 4420 -i 4 00:14:35.447 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:35.447 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:35.447 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.447 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:35.447 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:37.978 [ 0]:0x1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01b8a86b05d24de0a6ded4b1765e92e1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01b8a86b05d24de0a6ded4b1765e92e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:37.978 [ 0]:0x1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01b8a86b05d24de0a6ded4b1765e92e1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01b8a86b05d24de0a6ded4b1765e92e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:37.978 [ 1]:0x2 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c22939a128439c82775f8ef7ce2de1 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c22939a128439c82775f8ef7ce2de1 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.978 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.237 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:38.495 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:38.495 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8b1cb3c-6403-4a8d-b197-faea956242eb -a 10.0.0.2 -s 4420 -i 4 00:14:38.495 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:38.495 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:38.495 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.495 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:38.495 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:38.495 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:40.399 11:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:40.399 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:40.399 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.399 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:40.399 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.399 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:40.657 [ 0]:0x2 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c22939a128439c82775f8ef7ce2de1 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c22939a128439c82775f8ef7ce2de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.657 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:40.916 [ 0]:0x1 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01b8a86b05d24de0a6ded4b1765e92e1 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01b8a86b05d24de0a6ded4b1765e92e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:14:40.916 [ 1]:0x2 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c22939a128439c82775f8ef7ce2de1 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c22939a128439c82775f8ef7ce2de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.916 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:41.174 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:41.175 [ 0]:0x2 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c22939a128439c82775f8ef7ce2de1 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ e0c22939a128439c82775f8ef7ce2de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:41.175 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.433 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:41.433 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:41.433 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8b1cb3c-6403-4a8d-b197-faea956242eb -a 10.0.0.2 -s 4420 -i 4 00:14:41.692 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:41.692 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:41.692 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.692 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:41.692 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:41.692 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:43.594 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:43.852 [ 0]:0x1 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01b8a86b05d24de0a6ded4b1765e92e1 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01b8a86b05d24de0a6ded4b1765e92e1 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.852 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.111 [ 1]:0x2 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c22939a128439c82775f8ef7ce2de1 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c22939a128439c82775f8ef7ce2de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.111 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.371 [ 0]:0x2 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c22939a128439c82775f8ef7ce2de1 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c22939a128439c82775f8ef7ce2de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:44.371 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:44.371 [2024-07-26 11:22:40.007949] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:44.371 request: 00:14:44.371 { 00:14:44.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.371 "nsid": 2, 00:14:44.371 "host": "nqn.2016-06.io.spdk:host1", 00:14:44.371 "method": "nvmf_ns_remove_host", 00:14:44.371 "req_id": 1 00:14:44.371 } 00:14:44.371 Got JSON-RPC error response 00:14:44.371 response: 00:14:44.371 { 00:14:44.371 "code": -32602, 00:14:44.371 "message": "Invalid parameters" 00:14:44.371 } 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
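Editor's note: the trace above shows the expected-failure path: `nvmf_ns_remove_host` is called for a host that is no longer attached, and the target answers with a JSON-RPC error. A sketch of the request/response shape, with field values copied verbatim from the log (the dict layout here is illustrative, not the rpc.py wire format):

```python
# Request and error response as printed in the log for the failing
# nvmf_ns_remove_host call. -32602 is the standard JSON-RPC 2.0
# "Invalid params" error code.
request = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "nsid": 2,
    "host": "nqn.2016-06.io.spdk:host1",
    "method": "nvmf_ns_remove_host",
    "req_id": 1,
}
response = {"code": -32602, "message": "Invalid parameters"}

# The test harness treats this non-zero exit as the expected outcome (NOT ...).
assert response["code"] == -32602
```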
valid_exec_arg ns_is_visible 0x1 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.371 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.630 11:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.630 [ 0]:0x2 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c22939a128439c82775f8ef7ce2de1 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c22939a128439c82775f8ef7ce2de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1484008 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1484008 /var/tmp/host.sock 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1484008 ']' 00:14:44.630 
11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:44.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.630 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:44.630 [2024-07-26 11:22:40.219597] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:14:44.630 [2024-07-26 11:22:40.219649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484008 ] 00:14:44.630 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.630 [2024-07-26 11:22:40.283475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.889 [2024-07-26 11:22:40.362381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.457 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.457 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:45.457 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.716 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:45.716 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9506e596-952d-4003-813f-8589281435ba 00:14:45.716 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:45.716 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9506E596952D4003813F8589281435BA -i 00:14:45.974 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6b56ed1a-d4cd-4542-ad45-17570a04934c 00:14:45.974 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:45.974 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6B56ED1AD4CD4542AD4517570A04934C -i 00:14:46.233 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:46.492 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:46.492 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:46.492 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
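Editor's note: the `uuid2nguid` steps above convert a bdev UUID into the `-g` NGUID argument for `nvmf_subsystem_add_ns`. The log shows only the `tr -d -` step; the uppercasing is inferred from the resulting `-g 9506E596...` argument. A hedged one-line sketch under that assumption:

```python
def uuid2nguid(u: str) -> str:
    # Strip dashes (the `tr -d -` in the log) and uppercase (inferred from
    # the -g argument the trace passes to nvmf_subsystem_add_ns).
    return u.replace("-", "").upper()

# UUIDs and NGUIDs taken from the log above:
assert uuid2nguid("9506e596-952d-4003-813f-8589281435ba") == "9506E596952D4003813F8589281435BA"
assert uuid2nguid("6b56ed1a-d4cd-4542-ad45-17570a04934c") == "6B56ED1AD4CD4542AD4517570A04934C"
```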
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:46.751 nvme0n1 00:14:46.751 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:46.751 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:47.317 nvme1n2 00:14:47.317 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:47.317 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:47.317 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:47.317 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:47.317 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:47.576 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:47.576 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:47.576 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:47.576 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:47.576 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9506e596-952d-4003-813f-8589281435ba == \9\5\0\6\e\5\9\6\-\9\5\2\d\-\4\0\0\3\-\8\1\3\f\-\8\5\8\9\2\8\1\4\3\5\b\a ]] 00:14:47.576 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:47.576 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:47.576 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6b56ed1a-d4cd-4542-ad45-17570a04934c == \6\b\5\6\e\d\1\a\-\d\4\c\d\-\4\5\4\2\-\a\d\4\5\-\1\7\5\7\0\a\0\4\9\3\4\c ]] 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1484008 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1484008 ']' 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1484008 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1484008 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:47.841 
11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1484008' 00:14:47.841 killing process with pid 1484008 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1484008 00:14:47.841 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1484008 00:14:48.098 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.357 rmmod nvme_tcp 00:14:48.357 rmmod nvme_fabrics 00:14:48.357 rmmod nvme_keyring 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 1482003 ']' 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1482003 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1482003 ']' 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1482003 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1482003 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1482003' 00:14:48.357 killing process with pid 1482003 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1482003 00:14:48.357 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1482003 00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.616 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.152 00:14:51.152 real 0m23.337s 00:14:51.152 user 0m25.026s 00:14:51.152 sys 0m6.412s 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.152 ************************************ 00:14:51.152 END TEST nvmf_ns_masking 00:14:51.152 ************************************ 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.152 ************************************ 00:14:51.152 START TEST nvmf_nvme_cli 00:14:51.152 ************************************ 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:51.152 * Looking for test storage... 
00:14:51.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.152 11:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.152 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.153 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.429 
11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.429 11:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:56.429 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:56.429 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:56.429 Found net devices under 0000:86:00.0: cvl_0_0 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.429 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:56.430 Found net devices under 0000:86:00.1: cvl_0_1 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.430 11:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.430 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.430 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.430 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.430 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.430 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:14:56.689 00:14:56.689 --- 10.0.0.2 ping statistics --- 00:14:56.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.689 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:14:56.689 00:14:56.689 --- 10.0.0.1 ping statistics --- 00:14:56.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.689 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1488240 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1488240 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1488240 ']' 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.689 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.689 [2024-07-26 11:22:52.277444] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:14:56.689 [2024-07-26 11:22:52.277487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.689 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.947 [2024-07-26 11:22:52.350502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.947 [2024-07-26 11:22:52.430950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.947 [2024-07-26 11:22:52.430987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:56.947 [2024-07-26 11:22:52.430993] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.947 [2024-07-26 11:22:52.430999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.947 [2024-07-26 11:22:52.431005] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.947 [2024-07-26 11:22:52.431050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.947 [2024-07-26 11:22:52.431158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.947 [2024-07-26 11:22:52.431268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.947 [2024-07-26 11:22:52.431270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 [2024-07-26 11:22:53.127769] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 Malloc0 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 Malloc1 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.770 11:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 [2024-07-26 11:22:53.204436] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:57.770 00:14:57.770 Discovery Log Number of Records 2, Generation counter 2 00:14:57.770 =====Discovery Log Entry 0====== 00:14:57.770 trtype: tcp 00:14:57.770 adrfam: ipv4 00:14:57.770 subtype: current discovery subsystem 00:14:57.770 treq: not required 00:14:57.770 portid: 0 00:14:57.770 trsvcid: 4420 00:14:57.770 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:57.770 traddr: 10.0.0.2 00:14:57.770 eflags: explicit discovery connections, duplicate discovery information 00:14:57.770 sectype: none 00:14:57.770 =====Discovery Log Entry 1====== 00:14:57.770 trtype: tcp 00:14:57.770 adrfam: ipv4 00:14:57.770 subtype: nvme subsystem 00:14:57.770 treq: not required 00:14:57.770 portid: 0 00:14:57.770 trsvcid: 4420 00:14:57.770 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:57.770 traddr: 10.0.0.2 00:14:57.770 eflags: none 00:14:57.770 sectype: none 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:57.770 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.141 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:59.141 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.141 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.141 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:59.141 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:59.141 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:01.034 /dev/nvme0n1 ]] 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:15:01.034 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:01.291 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.549 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.549 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:01.549 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:01.549 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.549 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:01.549 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.549 rmmod nvme_tcp 00:15:01.549 rmmod nvme_fabrics 00:15:01.549 rmmod 
nvme_keyring 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1488240 ']' 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1488240 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1488240 ']' 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1488240 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1488240 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1488240' 00:15:01.549 killing process with pid 1488240 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1488240 00:15:01.549 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1488240 00:15:01.807 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.808 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.808 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.808 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.808 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.808 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.808 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.808 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.373 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.373 00:15:04.373 real 0m13.085s 00:15:04.373 user 0m21.228s 00:15:04.373 sys 0m4.925s 00:15:04.373 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.373 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.373 ************************************ 00:15:04.374 END TEST nvmf_nvme_cli 00:15:04.374 ************************************ 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.374 
************************************ 00:15:04.374 START TEST nvmf_vfio_user 00:15:04.374 ************************************ 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:04.374 * Looking for test storage... 00:15:04.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.374 11:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:04.374 11:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1489532 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1489532' 00:15:04.374 Process pid: 1489532 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1489532 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1489532 ']' 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.374 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:04.374 [2024-07-26 11:22:59.655710] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:15:04.374 [2024-07-26 11:22:59.655762] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.374 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.374 [2024-07-26 11:22:59.722576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.374 [2024-07-26 11:22:59.800350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.374 [2024-07-26 11:22:59.800386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.374 [2024-07-26 11:22:59.800393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.374 [2024-07-26 11:22:59.800399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.374 [2024-07-26 11:22:59.800403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.374 [2024-07-26 11:22:59.800450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.374 [2024-07-26 11:22:59.800557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.374 [2024-07-26 11:22:59.800665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.374 [2024-07-26 11:22:59.800666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.940 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.940 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:04.940 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:05.871 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:06.129 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:06.129 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:06.129 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.129 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:06.129 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:06.386 Malloc1 00:15:06.386 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:06.643 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:06.644 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:06.901 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.901 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:06.901 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:07.159 Malloc2 00:15:07.159 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:07.159 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:07.417 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:07.676 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:07.676 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:07.676 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:07.676 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:07.676 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:07.676 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:07.676 [2024-07-26 11:23:03.176572] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:15:07.676 [2024-07-26 11:23:03.176596] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490061 ] 00:15:07.676 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.676 [2024-07-26 11:23:03.204931] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:07.676 [2024-07-26 11:23:03.215140] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:07.676 [2024-07-26 11:23:03.215165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc297c59000 00:15:07.676 [2024-07-26 11:23:03.216140] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.676 [2024-07-26 11:23:03.217140] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.677 [2024-07-26 
11:23:03.218143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.677 [2024-07-26 11:23:03.219150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.677 [2024-07-26 11:23:03.220154] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.677 [2024-07-26 11:23:03.221160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.677 [2024-07-26 11:23:03.222166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.677 [2024-07-26 11:23:03.223176] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.677 [2024-07-26 11:23:03.224185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:07.677 [2024-07-26 11:23:03.224193] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc297c4e000 00:15:07.677 [2024-07-26 11:23:03.225111] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:07.677 [2024-07-26 11:23:03.233551] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:07.677 [2024-07-26 11:23:03.233576] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:07.677 [2024-07-26 11:23:03.238264] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:15:07.677 [2024-07-26 11:23:03.238302] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:07.677 [2024-07-26 11:23:03.238376] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:07.677 [2024-07-26 11:23:03.238390] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:07.677 [2024-07-26 11:23:03.238395] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:07.677 [2024-07-26 11:23:03.239258] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:07.677 [2024-07-26 11:23:03.239269] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:07.677 [2024-07-26 11:23:03.239276] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:07.677 [2024-07-26 11:23:03.240260] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:07.677 [2024-07-26 11:23:03.240267] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:07.677 [2024-07-26 11:23:03.240273] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:07.677 [2024-07-26 11:23:03.241267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:07.677 [2024-07-26 11:23:03.241278] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:07.677 [2024-07-26 11:23:03.242274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:07.677 [2024-07-26 11:23:03.242282] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:07.677 [2024-07-26 11:23:03.242287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:07.677 [2024-07-26 11:23:03.242292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:07.677 [2024-07-26 11:23:03.242398] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:07.677 [2024-07-26 11:23:03.242402] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:07.677 [2024-07-26 11:23:03.242406] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:07.677 [2024-07-26 11:23:03.243634] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:07.677 [2024-07-26 11:23:03.244285] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:07.677 [2024-07-26 11:23:03.245298] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:07.677 
[2024-07-26 11:23:03.246299] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.677 [2024-07-26 11:23:03.246361] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:07.677 [2024-07-26 11:23:03.247309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:07.677 [2024-07-26 11:23:03.247316] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:07.677 [2024-07-26 11:23:03.247320] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:07.677 [2024-07-26 11:23:03.247337] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:07.677 [2024-07-26 11:23:03.247343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:07.677 [2024-07-26 11:23:03.247357] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.677 [2024-07-26 11:23:03.247362] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.677 [2024-07-26 11:23:03.247365] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.677 [2024-07-26 11:23:03.247377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.677 [2024-07-26 11:23:03.247421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:07.677 [2024-07-26 11:23:03.247431] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:07.677 [2024-07-26 11:23:03.247435] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:07.677 [2024-07-26 11:23:03.247442] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:07.677 [2024-07-26 11:23:03.247446] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:07.677 [2024-07-26 11:23:03.247450] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:07.677 [2024-07-26 11:23:03.247454] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:07.677 [2024-07-26 11:23:03.247458] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:07.677 [2024-07-26 11:23:03.247465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:07.677 [2024-07-26 11:23:03.247475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.678 [2024-07-26 11:23:03.247507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.678 [2024-07-26 11:23:03.247513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.678 [2024-07-26 11:23:03.247520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.678 [2024-07-26 11:23:03.247524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247532] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247551] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:07.678 [2024-07-26 11:23:03.247555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247575] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247650] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:07.678 [2024-07-26 11:23:03.247655] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:07.678 [2024-07-26 11:23:03.247658] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.678 [2024-07-26 11:23:03.247664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247685] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:07.678 [2024-07-26 11:23:03.247696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:07.678 [2024-07-26 
11:23:03.247708] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.678 [2024-07-26 11:23:03.247712] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.678 [2024-07-26 11:23:03.247715] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.678 [2024-07-26 11:23:03.247720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247757] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247763] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.678 [2024-07-26 11:23:03.247766] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.678 [2024-07-26 11:23:03.247769] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.678 [2024-07-26 11:23:03.247775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247792] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247824] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:07.678 [2024-07-26 11:23:03.247829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:07.678 [2024-07-26 11:23:03.247834] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:07.678 [2024-07-26 11:23:03.247848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:15:07.678 [2024-07-26 11:23:03.247867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.678 [2024-07-26 11:23:03.247915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:07.678 [2024-07-26 11:23:03.247927] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:07.678 [2024-07-26 11:23:03.247931] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:07.679 [2024-07-26 11:23:03.247934] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:07.679 [2024-07-26 11:23:03.247936] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:07.679 [2024-07-26 11:23:03.247939] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:07.679 [2024-07-26 11:23:03.247945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:07.679 [2024-07-26 11:23:03.247951] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:15:07.679 [2024-07-26 11:23:03.247955] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:07.679 [2024-07-26 11:23:03.247958] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.679 [2024-07-26 11:23:03.247963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:07.679 [2024-07-26 11:23:03.247969] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:07.679 [2024-07-26 11:23:03.247972] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.679 [2024-07-26 11:23:03.247975] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.679 [2024-07-26 11:23:03.247980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.679 [2024-07-26 11:23:03.247987] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:07.679 [2024-07-26 11:23:03.247990] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:07.679 [2024-07-26 11:23:03.247993] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.679 [2024-07-26 11:23:03.247998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:07.679 [2024-07-26 11:23:03.248006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:07.679 [2024-07-26 11:23:03.248018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:07.679 [2024-07-26 11:23:03.248028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:07.679 [2024-07-26 11:23:03.248034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:07.679 ===================================================== 00:15:07.679 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.679 ===================================================== 00:15:07.679 Controller Capabilities/Features 00:15:07.679 ================================ 00:15:07.679 Vendor ID: 4e58 00:15:07.679 Subsystem Vendor ID: 4e58 00:15:07.679 Serial Number: SPDK1 00:15:07.679 Model Number: SPDK bdev Controller 00:15:07.679 Firmware Version: 24.09 00:15:07.679 Recommended Arb Burst: 6 00:15:07.679 IEEE OUI Identifier: 8d 6b 50 00:15:07.679 Multi-path I/O 00:15:07.679 May have multiple subsystem ports: Yes 00:15:07.679 May have multiple controllers: Yes 00:15:07.679 Associated with SR-IOV VF: No 00:15:07.679 Max Data Transfer Size: 131072 00:15:07.679 Max Number of Namespaces: 32 00:15:07.679 Max Number of I/O Queues: 127 00:15:07.679 NVMe Specification Version (VS): 1.3 00:15:07.679 NVMe Specification Version (Identify): 1.3 00:15:07.679 Maximum Queue Entries: 256 00:15:07.679 Contiguous Queues Required: Yes 00:15:07.679 Arbitration Mechanisms Supported 00:15:07.679 Weighted Round Robin: Not Supported 00:15:07.679 Vendor Specific: Not Supported 00:15:07.679 Reset Timeout: 15000 ms 00:15:07.679 Doorbell Stride: 4 bytes 00:15:07.679 NVM Subsystem Reset: Not Supported 00:15:07.679 Command Sets Supported 00:15:07.679 NVM Command Set: Supported 00:15:07.679 Boot Partition: Not Supported 00:15:07.679 Memory Page Size Minimum: 4096 bytes 00:15:07.679 Memory Page Size Maximum: 4096 bytes 00:15:07.679 Persistent Memory Region: Not 
Supported 00:15:07.679 Optional Asynchronous Events Supported 00:15:07.679 Namespace Attribute Notices: Supported 00:15:07.679 Firmware Activation Notices: Not Supported 00:15:07.679 ANA Change Notices: Not Supported 00:15:07.679 PLE Aggregate Log Change Notices: Not Supported 00:15:07.679 LBA Status Info Alert Notices: Not Supported 00:15:07.679 EGE Aggregate Log Change Notices: Not Supported 00:15:07.679 Normal NVM Subsystem Shutdown event: Not Supported 00:15:07.679 Zone Descriptor Change Notices: Not Supported 00:15:07.679 Discovery Log Change Notices: Not Supported 00:15:07.679 Controller Attributes 00:15:07.679 128-bit Host Identifier: Supported 00:15:07.679 Non-Operational Permissive Mode: Not Supported 00:15:07.679 NVM Sets: Not Supported 00:15:07.679 Read Recovery Levels: Not Supported 00:15:07.679 Endurance Groups: Not Supported 00:15:07.679 Predictable Latency Mode: Not Supported 00:15:07.679 Traffic Based Keep ALive: Not Supported 00:15:07.679 Namespace Granularity: Not Supported 00:15:07.679 SQ Associations: Not Supported 00:15:07.679 UUID List: Not Supported 00:15:07.679 Multi-Domain Subsystem: Not Supported 00:15:07.679 Fixed Capacity Management: Not Supported 00:15:07.679 Variable Capacity Management: Not Supported 00:15:07.679 Delete Endurance Group: Not Supported 00:15:07.679 Delete NVM Set: Not Supported 00:15:07.679 Extended LBA Formats Supported: Not Supported 00:15:07.679 Flexible Data Placement Supported: Not Supported 00:15:07.679 00:15:07.679 Controller Memory Buffer Support 00:15:07.679 ================================ 00:15:07.679 Supported: No 00:15:07.679 00:15:07.679 Persistent Memory Region Support 00:15:07.679 ================================ 00:15:07.679 Supported: No 00:15:07.679 00:15:07.679 Admin Command Set Attributes 00:15:07.679 ============================ 00:15:07.679 Security Send/Receive: Not Supported 00:15:07.679 Format NVM: Not Supported 00:15:07.679 Firmware Activate/Download: Not Supported 00:15:07.679 Namespace 
Management: Not Supported 00:15:07.679 Device Self-Test: Not Supported 00:15:07.679 Directives: Not Supported 00:15:07.679 NVMe-MI: Not Supported 00:15:07.679 Virtualization Management: Not Supported 00:15:07.679 Doorbell Buffer Config: Not Supported 00:15:07.679 Get LBA Status Capability: Not Supported 00:15:07.679 Command & Feature Lockdown Capability: Not Supported 00:15:07.679 Abort Command Limit: 4 00:15:07.679 Async Event Request Limit: 4 00:15:07.679 Number of Firmware Slots: N/A 00:15:07.679 Firmware Slot 1 Read-Only: N/A 00:15:07.679 Firmware Activation Without Reset: N/A 00:15:07.679 Multiple Update Detection Support: N/A 00:15:07.679 Firmware Update Granularity: No Information Provided 00:15:07.679 Per-Namespace SMART Log: No 00:15:07.679 Asymmetric Namespace Access Log Page: Not Supported 00:15:07.679 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:07.679 Command Effects Log Page: Supported 00:15:07.679 Get Log Page Extended Data: Supported 00:15:07.679 Telemetry Log Pages: Not Supported 00:15:07.679 Persistent Event Log Pages: Not Supported 00:15:07.679 Supported Log Pages Log Page: May Support 00:15:07.679 Commands Supported & Effects Log Page: Not Supported 00:15:07.679 Feature Identifiers & Effects Log Page:May Support 00:15:07.679 NVMe-MI Commands & Effects Log Page: May Support 00:15:07.679 Data Area 4 for Telemetry Log: Not Supported 00:15:07.679 Error Log Page Entries Supported: 128 00:15:07.679 Keep Alive: Supported 00:15:07.679 Keep Alive Granularity: 10000 ms 00:15:07.679 00:15:07.679 NVM Command Set Attributes 00:15:07.680 ========================== 00:15:07.680 Submission Queue Entry Size 00:15:07.680 Max: 64 00:15:07.680 Min: 64 00:15:07.680 Completion Queue Entry Size 00:15:07.680 Max: 16 00:15:07.680 Min: 16 00:15:07.680 Number of Namespaces: 32 00:15:07.680 Compare Command: Supported 00:15:07.680 Write Uncorrectable Command: Not Supported 00:15:07.680 Dataset Management Command: Supported 00:15:07.680 Write Zeroes Command: Supported 
00:15:07.680 Set Features Save Field: Not Supported 00:15:07.680 Reservations: Not Supported 00:15:07.680 Timestamp: Not Supported 00:15:07.680 Copy: Supported 00:15:07.680 Volatile Write Cache: Present 00:15:07.680 Atomic Write Unit (Normal): 1 00:15:07.680 Atomic Write Unit (PFail): 1 00:15:07.680 Atomic Compare & Write Unit: 1 00:15:07.680 Fused Compare & Write: Supported 00:15:07.680 Scatter-Gather List 00:15:07.680 SGL Command Set: Supported (Dword aligned) 00:15:07.680 SGL Keyed: Not Supported 00:15:07.680 SGL Bit Bucket Descriptor: Not Supported 00:15:07.680 SGL Metadata Pointer: Not Supported 00:15:07.680 Oversized SGL: Not Supported 00:15:07.680 SGL Metadata Address: Not Supported 00:15:07.680 SGL Offset: Not Supported 00:15:07.680 Transport SGL Data Block: Not Supported 00:15:07.680 Replay Protected Memory Block: Not Supported 00:15:07.680 00:15:07.680 Firmware Slot Information 00:15:07.680 ========================= 00:15:07.680 Active slot: 1 00:15:07.680 Slot 1 Firmware Revision: 24.09 00:15:07.680 00:15:07.680 00:15:07.680 Commands Supported and Effects 00:15:07.680 ============================== 00:15:07.680 Admin Commands 00:15:07.680 -------------- 00:15:07.680 Get Log Page (02h): Supported 00:15:07.680 Identify (06h): Supported 00:15:07.680 Abort (08h): Supported 00:15:07.680 Set Features (09h): Supported 00:15:07.680 Get Features (0Ah): Supported 00:15:07.680 Asynchronous Event Request (0Ch): Supported 00:15:07.680 Keep Alive (18h): Supported 00:15:07.680 I/O Commands 00:15:07.680 ------------ 00:15:07.680 Flush (00h): Supported LBA-Change 00:15:07.680 Write (01h): Supported LBA-Change 00:15:07.680 Read (02h): Supported 00:15:07.680 Compare (05h): Supported 00:15:07.680 Write Zeroes (08h): Supported LBA-Change 00:15:07.680 Dataset Management (09h): Supported LBA-Change 00:15:07.680 Copy (19h): Supported LBA-Change 00:15:07.680 00:15:07.680 Error Log 00:15:07.680 ========= 00:15:07.680 00:15:07.680 Arbitration 00:15:07.680 =========== 00:15:07.680 
Arbitration Burst: 1 00:15:07.680 00:15:07.680 Power Management 00:15:07.680 ================ 00:15:07.680 Number of Power States: 1 00:15:07.680 Current Power State: Power State #0 00:15:07.680 Power State #0: 00:15:07.680 Max Power: 0.00 W 00:15:07.680 Non-Operational State: Operational 00:15:07.680 Entry Latency: Not Reported 00:15:07.680 Exit Latency: Not Reported 00:15:07.680 Relative Read Throughput: 0 00:15:07.680 Relative Read Latency: 0 00:15:07.680 Relative Write Throughput: 0 00:15:07.680 Relative Write Latency: 0 00:15:07.680 Idle Power: Not Reported 00:15:07.680 Active Power: Not Reported 00:15:07.680 Non-Operational Permissive Mode: Not Supported 00:15:07.680 00:15:07.680 Health Information 00:15:07.680 ================== 00:15:07.680 Critical Warnings: 00:15:07.680 Available Spare Space: OK 00:15:07.680 Temperature: OK 00:15:07.680 Device Reliability: OK 00:15:07.680 Read Only: No 00:15:07.680 Volatile Memory Backup: OK 00:15:07.680 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:07.680 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:07.680 Available Spare: 0% 00:15:07.680 Available Sp[2024-07-26 11:23:03.248116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:07.680 [2024-07-26 11:23:03.248123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:07.680 [2024-07-26 11:23:03.248146] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:07.680 [2024-07-26 11:23:03.248154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.680 [2024-07-26 11:23:03.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.680 [2024-07-26 11:23:03.248165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.680 [2024-07-26 11:23:03.248170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.680 [2024-07-26 11:23:03.251634] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:07.680 [2024-07-26 11:23:03.251645] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:07.680 [2024-07-26 11:23:03.252331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.680 [2024-07-26 11:23:03.252379] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:07.680 [2024-07-26 11:23:03.252385] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:07.680 [2024-07-26 11:23:03.253347] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:07.680 [2024-07-26 11:23:03.253356] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:07.680 [2024-07-26 11:23:03.253403] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:07.680 [2024-07-26 11:23:03.254368] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:07.680 are Threshold: 0% 00:15:07.680 Life Percentage Used: 0% 00:15:07.680 Data Units Read: 0 00:15:07.680 Data Units Written: 0 00:15:07.680 Host Read Commands: 0 00:15:07.680 Host Write Commands: 
0 00:15:07.680 Controller Busy Time: 0 minutes 00:15:07.680 Power Cycles: 0 00:15:07.680 Power On Hours: 0 hours 00:15:07.680 Unsafe Shutdowns: 0 00:15:07.680 Unrecoverable Media Errors: 0 00:15:07.680 Lifetime Error Log Entries: 0 00:15:07.680 Warning Temperature Time: 0 minutes 00:15:07.680 Critical Temperature Time: 0 minutes 00:15:07.680 00:15:07.680 Number of Queues 00:15:07.680 ================ 00:15:07.680 Number of I/O Submission Queues: 127 00:15:07.680 Number of I/O Completion Queues: 127 00:15:07.680 00:15:07.680 Active Namespaces 00:15:07.680 ================= 00:15:07.680 Namespace ID:1 00:15:07.680 Error Recovery Timeout: Unlimited 00:15:07.680 Command Set Identifier: NVM (00h) 00:15:07.680 Deallocate: Supported 00:15:07.680 Deallocated/Unwritten Error: Not Supported 00:15:07.680 Deallocated Read Value: Unknown 00:15:07.680 Deallocate in Write Zeroes: Not Supported 00:15:07.680 Deallocated Guard Field: 0xFFFF 00:15:07.680 Flush: Supported 00:15:07.680 Reservation: Supported 00:15:07.680 Namespace Sharing Capabilities: Multiple Controllers 00:15:07.680 Size (in LBAs): 131072 (0GiB) 00:15:07.680 Capacity (in LBAs): 131072 (0GiB) 00:15:07.680 Utilization (in LBAs): 131072 (0GiB) 00:15:07.680 NGUID: 60408C3AA76F4C369FA522CCBA132ED6 00:15:07.680 UUID: 60408c3a-a76f-4c36-9fa5-22ccba132ed6 00:15:07.680 Thin Provisioning: Not Supported 00:15:07.680 Per-NS Atomic Units: Yes 00:15:07.680 Atomic Boundary Size (Normal): 0 00:15:07.680 Atomic Boundary Size (PFail): 0 00:15:07.680 Atomic Boundary Offset: 0 00:15:07.680 Maximum Single Source Range Length: 65535 00:15:07.680 Maximum Copy Length: 65535 00:15:07.680 Maximum Source Range Count: 1 00:15:07.680 NGUID/EUI64 Never Reused: No 00:15:07.680 Namespace Write Protected: No 00:15:07.680 Number of LBA Formats: 1 00:15:07.680 Current LBA Format: LBA Format #00 00:15:07.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:07.680 00:15:07.680 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:07.680 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.937 [2024-07-26 11:23:03.468410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.197 Initializing NVMe Controllers 00:15:13.197 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.197 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:13.197 Initialization complete. Launching workers. 00:15:13.197 ======================================================== 00:15:13.197 Latency(us) 00:15:13.197 Device Information : IOPS MiB/s Average min max 00:15:13.197 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39938.13 156.01 3204.77 943.31 6654.43 00:15:13.197 ======================================================== 00:15:13.197 Total : 39938.13 156.01 3204.77 943.31 6654.43 00:15:13.197 00:15:13.197 [2024-07-26 11:23:08.490558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.197 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:13.197 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.197 [2024-07-26 11:23:08.711576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.455 Initializing NVMe Controllers 00:15:18.455 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.455 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:18.455 Initialization complete. Launching workers. 00:15:18.455 ======================================================== 00:15:18.455 Latency(us) 00:15:18.455 Device Information : IOPS MiB/s Average min max 00:15:18.455 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15874.45 62.01 8068.67 5997.29 15964.00 00:15:18.455 ======================================================== 00:15:18.455 Total : 15874.45 62.01 8068.67 5997.29 15964.00 00:15:18.455 00:15:18.455 [2024-07-26 11:23:13.753650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.455 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:18.455 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.455 [2024-07-26 11:23:13.948612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.717 [2024-07-26 11:23:19.016940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.717 Initializing NVMe Controllers 00:15:23.717 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:23.717 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:23.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:23.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:23.717 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:23.717 Initialization complete. Launching workers. 00:15:23.717 Starting thread on core 2 00:15:23.717 Starting thread on core 3 00:15:23.717 Starting thread on core 1 00:15:23.717 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:23.717 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.717 [2024-07-26 11:23:19.290008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.999 [2024-07-26 11:23:22.437833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.999 Initializing NVMe Controllers 00:15:26.999 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.999 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.999 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:26.999 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:26.999 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:26.999 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:26.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:26.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:26.999 Initialization complete. Launching workers. 
00:15:26.999 Starting thread on core 1 with urgent priority queue 00:15:26.999 Starting thread on core 2 with urgent priority queue 00:15:26.999 Starting thread on core 3 with urgent priority queue 00:15:26.999 Starting thread on core 0 with urgent priority queue 00:15:26.999 SPDK bdev Controller (SPDK1 ) core 0: 2161.00 IO/s 46.27 secs/100000 ios 00:15:26.999 SPDK bdev Controller (SPDK1 ) core 1: 2036.67 IO/s 49.10 secs/100000 ios 00:15:26.999 SPDK bdev Controller (SPDK1 ) core 2: 1878.33 IO/s 53.24 secs/100000 ios 00:15:26.999 SPDK bdev Controller (SPDK1 ) core 3: 2158.00 IO/s 46.34 secs/100000 ios 00:15:26.999 ======================================================== 00:15:26.999 00:15:26.999 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:26.999 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.256 [2024-07-26 11:23:22.700364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.256 Initializing NVMe Controllers 00:15:27.256 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.256 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.256 Namespace ID: 1 size: 0GB 00:15:27.256 Initialization complete. 00:15:27.256 INFO: using host memory buffer for IO 00:15:27.256 Hello world! 
00:15:27.256 [2024-07-26 11:23:22.735568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.256 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:27.256 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.513 [2024-07-26 11:23:22.997599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.446 Initializing NVMe Controllers 00:15:28.446 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.446 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.446 Initialization complete. Launching workers. 00:15:28.446 submit (in ns) avg, min, max = 7697.9, 3200.0, 3998731.4 00:15:28.446 complete (in ns) avg, min, max = 20474.1, 1753.3, 3997284.8 00:15:28.446 00:15:28.446 Submit histogram 00:15:28.446 ================ 00:15:28.446 Range in us Cumulative Count 00:15:28.446 3.200 - 3.215: 0.0540% ( 9) 00:15:28.446 3.215 - 3.230: 0.6537% ( 100) 00:15:28.446 3.230 - 3.246: 3.2564% ( 434) 00:15:28.446 3.246 - 3.261: 8.3898% ( 856) 00:15:28.446 3.261 - 3.276: 14.3628% ( 996) 00:15:28.446 3.276 - 3.291: 21.3793% ( 1170) 00:15:28.446 3.291 - 3.307: 27.9940% ( 1103) 00:15:28.446 3.307 - 3.322: 33.4993% ( 918) 00:15:28.446 3.322 - 3.337: 39.5382% ( 1007) 00:15:28.446 3.337 - 3.352: 45.7931% ( 1043) 00:15:28.446 3.352 - 3.368: 51.3583% ( 928) 00:15:28.446 3.368 - 3.383: 57.1394% ( 964) 00:15:28.446 3.383 - 3.398: 64.7376% ( 1267) 00:15:28.446 3.398 - 3.413: 71.0405% ( 1051) 00:15:28.446 3.413 - 3.429: 76.2339% ( 866) 00:15:28.446 3.429 - 3.444: 80.8996% ( 778) 00:15:28.446 3.444 - 3.459: 83.8681% ( 495) 00:15:28.446 3.459 - 3.474: 86.0630% ( 366) 00:15:28.446 3.474 - 3.490: 87.2204% ( 193) 00:15:28.446 3.490 - 
3.505: 87.8561% ( 106) 00:15:28.446 3.505 - 3.520: 88.2279% ( 62) 00:15:28.446 3.520 - 3.535: 88.7436% ( 86) 00:15:28.446 3.535 - 3.550: 89.4093% ( 111) 00:15:28.446 3.550 - 3.566: 90.1769% ( 128) 00:15:28.446 3.566 - 3.581: 91.1424% ( 161) 00:15:28.446 3.581 - 3.596: 92.0240% ( 147) 00:15:28.446 3.596 - 3.611: 92.8576% ( 139) 00:15:28.446 3.611 - 3.627: 93.8171% ( 160) 00:15:28.446 3.627 - 3.642: 94.7706% ( 159) 00:15:28.446 3.642 - 3.657: 95.7241% ( 159) 00:15:28.446 3.657 - 3.672: 96.6237% ( 150) 00:15:28.446 3.672 - 3.688: 97.4273% ( 134) 00:15:28.446 3.688 - 3.703: 98.0930% ( 111) 00:15:28.446 3.703 - 3.718: 98.4648% ( 62) 00:15:28.446 3.718 - 3.733: 98.8066% ( 57) 00:15:28.446 3.733 - 3.749: 99.1304% ( 54) 00:15:28.446 3.749 - 3.764: 99.3163% ( 31) 00:15:28.446 3.764 - 3.779: 99.5082% ( 32) 00:15:28.446 3.779 - 3.794: 99.6042% ( 16) 00:15:28.446 3.794 - 3.810: 99.6462% ( 7) 00:15:28.446 3.810 - 3.825: 99.6642% ( 3) 00:15:28.446 3.825 - 3.840: 99.6762% ( 2) 00:15:28.446 3.840 - 3.855: 99.6942% ( 3) 00:15:28.446 3.931 - 3.962: 99.7001% ( 1) 00:15:28.446 4.937 - 4.968: 99.7121% ( 2) 00:15:28.446 5.059 - 5.090: 99.7181% ( 1) 00:15:28.446 5.090 - 5.120: 99.7241% ( 1) 00:15:28.446 5.272 - 5.303: 99.7301% ( 1) 00:15:28.446 5.333 - 5.364: 99.7361% ( 1) 00:15:28.446 5.425 - 5.455: 99.7421% ( 1) 00:15:28.446 5.486 - 5.516: 99.7601% ( 3) 00:15:28.446 5.516 - 5.547: 99.7721% ( 2) 00:15:28.446 5.547 - 5.577: 99.7841% ( 2) 00:15:28.446 5.577 - 5.608: 99.7901% ( 1) 00:15:28.446 5.638 - 5.669: 99.7961% ( 1) 00:15:28.446 5.669 - 5.699: 99.8021% ( 1) 00:15:28.446 5.699 - 5.730: 99.8081% ( 1) 00:15:28.446 5.730 - 5.760: 99.8141% ( 1) 00:15:28.446 5.790 - 5.821: 99.8201% ( 1) 00:15:28.446 5.821 - 5.851: 99.8261% ( 1) 00:15:28.446 5.943 - 5.973: 99.8321% ( 1) 00:15:28.446 6.095 - 6.126: 99.8381% ( 1) 00:15:28.446 6.187 - 6.217: 99.8441% ( 1) 00:15:28.446 6.309 - 6.339: 99.8501% ( 1) 00:15:28.446 6.430 - 6.461: 99.8561% ( 1) 00:15:28.446 6.522 - 6.552: 99.8621% ( 1) 00:15:28.446 
6.674 - 6.705: 99.8681% ( 1)
00:15:28.446 7.101 - 7.131: 99.8801% ( 2)
00:15:28.446 7.253 - 7.284: 99.8861% ( 1)
00:15:28.446 10.240 - 10.301: 99.8921% ( 1)
00:15:28.446 3994.575 - 4025.783: 100.0000% ( 18)
00:15:28.446
00:15:28.446 Complete histogram
00:15:28.446 ==================
00:15:28.446 Range in us Cumulative Count
00:15:28.446 1.752 - 1.760: 0.2519% ( 42)
00:15:28.446 1.760 - 1.768: 4.2639% ( 669)
00:15:28.446 1.768 - 1.775: 22.8906% ( 3106)
00:15:28.446 1.775 - 1.783: 53.7211% ( 5141)
00:15:28.446 1.783 - 1.790: 72.9175% ( 3201)
00:15:28.446 1.790 - 1.798: 78.8666% ( 992)
00:15:28.446 1.798 - [2024-07-26 11:23:24.018579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:28.446 1.806: 81.1574% ( 382)
00:15:28.446 1.806 - 1.813: 83.4723% ( 386)
00:15:28.446 1.813 - 1.821: 86.7166% ( 541)
00:15:28.446 1.821 - 1.829: 90.4828% ( 628)
00:15:28.446 1.829 - 1.836: 93.4573% ( 496)
00:15:28.446 1.836 - 1.844: 95.4363% ( 330)
00:15:28.446 1.844 - 1.851: 96.8756% ( 240)
00:15:28.446 1.851 - 1.859: 97.9490% ( 179)
00:15:28.446 1.859 - 1.867: 98.6207% ( 112)
00:15:28.446 1.867 - 1.874: 98.8786% ( 43)
00:15:28.446 1.874 - 1.882: 98.9745% ( 16)
00:15:28.446 1.882 - 1.890: 99.0645% ( 15)
00:15:28.446 1.890 - 1.897: 99.1844% ( 20)
00:15:28.446 1.897 - 1.905: 99.2564% ( 12)
00:15:28.446 1.905 - 1.912: 99.3463% ( 15)
00:15:28.446 1.912 - 1.920: 99.3703% ( 4)
00:15:28.446 1.928 - 1.935: 99.3763% ( 1)
00:15:28.446 1.935 - 1.943: 99.3823% ( 1)
00:15:28.446 1.943 - 1.950: 99.3883% ( 1)
00:15:28.446 1.981 - 1.996: 99.3943% ( 1)
00:15:28.446 2.011 - 2.027: 99.4003% ( 1)
00:15:28.446 3.429 - 3.444: 99.4063% ( 1)
00:15:28.446 3.611 - 3.627: 99.4123% ( 1)
00:15:28.446 3.672 - 3.688: 99.4183% ( 1)
00:15:28.446 3.840 - 3.855: 99.4303% ( 2)
00:15:28.446 3.901 - 3.931: 99.4363% ( 1)
00:15:28.446 3.931 - 3.962: 99.4423% ( 1)
00:15:28.446 4.053 - 4.084: 99.4483% ( 1)
00:15:28.446 4.084 - 4.114: 99.4543% ( 1)
00:15:28.446 4.145 - 4.175: 99.4663% ( 2)
00:15:28.446 4.206 - 4.236: 99.4723% ( 1)
00:15:28.446 4.267 - 4.297: 99.4783% ( 1)
00:15:28.446 4.328 - 4.358: 99.4903% ( 2)
00:15:28.446 4.602 - 4.632: 99.4963% ( 1)
00:15:28.446 4.754 - 4.785: 99.5022% ( 1)
00:15:28.446 4.968 - 4.998: 99.5082% ( 1)
00:15:28.446 4.998 - 5.029: 99.5142% ( 1)
00:15:28.446 5.638 - 5.669: 99.5202% ( 1)
00:15:28.446 6.095 - 6.126: 99.5262% ( 1)
00:15:28.446 6.888 - 6.918: 99.5322% ( 1)
00:15:28.446 3978.971 - 3994.575: 99.5442% ( 2)
00:15:28.446 3994.575 - 4025.783: 100.0000% ( 76)
00:15:28.446
00:15:28.446 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:15:28.446 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:15:28.446 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:15:28.446 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:15:28.446 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:28.704 [
00:15:28.704 {
00:15:28.704 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:28.704 "subtype": "Discovery",
00:15:28.704 "listen_addresses": [],
00:15:28.704 "allow_any_host": true,
00:15:28.704 "hosts": []
00:15:28.704 },
00:15:28.704 {
00:15:28.704 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:28.704 "subtype": "NVMe",
00:15:28.704 "listen_addresses": [
00:15:28.704 {
00:15:28.704 "trtype": "VFIOUSER",
00:15:28.704 "adrfam": "IPv4",
00:15:28.704 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:28.704 "trsvcid": "0"
00:15:28.704 }
00:15:28.704 ],
00:15:28.704 "allow_any_host": true,
00:15:28.704 "hosts": [],
00:15:28.704
"serial_number": "SPDK1", 00:15:28.704 "model_number": "SPDK bdev Controller", 00:15:28.704 "max_namespaces": 32, 00:15:28.704 "min_cntlid": 1, 00:15:28.704 "max_cntlid": 65519, 00:15:28.704 "namespaces": [ 00:15:28.704 { 00:15:28.704 "nsid": 1, 00:15:28.704 "bdev_name": "Malloc1", 00:15:28.704 "name": "Malloc1", 00:15:28.704 "nguid": "60408C3AA76F4C369FA522CCBA132ED6", 00:15:28.704 "uuid": "60408c3a-a76f-4c36-9fa5-22ccba132ed6" 00:15:28.704 } 00:15:28.704 ] 00:15:28.704 }, 00:15:28.704 { 00:15:28.704 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:28.704 "subtype": "NVMe", 00:15:28.704 "listen_addresses": [ 00:15:28.704 { 00:15:28.704 "trtype": "VFIOUSER", 00:15:28.704 "adrfam": "IPv4", 00:15:28.704 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:28.704 "trsvcid": "0" 00:15:28.704 } 00:15:28.704 ], 00:15:28.704 "allow_any_host": true, 00:15:28.704 "hosts": [], 00:15:28.704 "serial_number": "SPDK2", 00:15:28.704 "model_number": "SPDK bdev Controller", 00:15:28.704 "max_namespaces": 32, 00:15:28.704 "min_cntlid": 1, 00:15:28.704 "max_cntlid": 65519, 00:15:28.704 "namespaces": [ 00:15:28.704 { 00:15:28.704 "nsid": 1, 00:15:28.704 "bdev_name": "Malloc2", 00:15:28.704 "name": "Malloc2", 00:15:28.704 "nguid": "7FEA71249E7641308EBF9504F4A76640", 00:15:28.704 "uuid": "7fea7124-9e76-4130-8ebf-9504f4a76640" 00:15:28.704 } 00:15:28.704 ] 00:15:28.704 } 00:15:28.704 ] 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1493555 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:28.704 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:28.704 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.962 [2024-07-26 11:23:24.397047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.962 Malloc3 00:15:28.962 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:28.962 [2024-07-26 11:23:24.615673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:29.219 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:29.219 Asynchronous Event Request test 00:15:29.219 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.219 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.219 Registering asynchronous event callbacks... 
00:15:29.219 Starting namespace attribute notice tests for all controllers...
00:15:29.219 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:15:29.219 aer_cb - Changed Namespace
00:15:29.219 Cleaning up...
00:15:29.219 [
00:15:29.219 {
00:15:29.219 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:29.219 "subtype": "Discovery",
00:15:29.219 "listen_addresses": [],
00:15:29.219 "allow_any_host": true,
00:15:29.219 "hosts": []
00:15:29.219 },
00:15:29.219 {
00:15:29.219 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:29.219 "subtype": "NVMe",
00:15:29.219 "listen_addresses": [
00:15:29.219 {
00:15:29.219 "trtype": "VFIOUSER",
00:15:29.219 "adrfam": "IPv4",
00:15:29.219 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:29.219 "trsvcid": "0"
00:15:29.219 }
00:15:29.219 ],
00:15:29.219 "allow_any_host": true,
00:15:29.219 "hosts": [],
00:15:29.219 "serial_number": "SPDK1",
00:15:29.219 "model_number": "SPDK bdev Controller",
00:15:29.219 "max_namespaces": 32,
00:15:29.219 "min_cntlid": 1,
00:15:29.219 "max_cntlid": 65519,
00:15:29.219 "namespaces": [
00:15:29.219 {
00:15:29.219 "nsid": 1,
00:15:29.219 "bdev_name": "Malloc1",
00:15:29.219 "name": "Malloc1",
00:15:29.219 "nguid": "60408C3AA76F4C369FA522CCBA132ED6",
00:15:29.219 "uuid": "60408c3a-a76f-4c36-9fa5-22ccba132ed6"
00:15:29.219 },
00:15:29.219 {
00:15:29.219 "nsid": 2,
00:15:29.219 "bdev_name": "Malloc3",
00:15:29.219 "name": "Malloc3",
00:15:29.219 "nguid": "D3AB392E87CD47DD9ECFDF23636998B7",
00:15:29.219 "uuid": "d3ab392e-87cd-47dd-9ecf-df23636998b7"
00:15:29.219 }
00:15:29.219 ]
00:15:29.219 },
00:15:29.219 {
00:15:29.219 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:29.219 "subtype": "NVMe",
00:15:29.219 "listen_addresses": [
00:15:29.219 {
00:15:29.219 "trtype": "VFIOUSER",
00:15:29.219 "adrfam": "IPv4",
00:15:29.219 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:29.219 "trsvcid": "0"
00:15:29.219 }
00:15:29.219 ],
00:15:29.219 "allow_any_host": true,
00:15:29.220 "hosts": [],
00:15:29.220 "serial_number": "SPDK2",
00:15:29.220 "model_number": "SPDK bdev Controller",
00:15:29.220 "max_namespaces": 32,
00:15:29.220 "min_cntlid": 1,
00:15:29.220 "max_cntlid": 65519,
00:15:29.220 "namespaces": [
00:15:29.220 {
00:15:29.220 "nsid": 1,
00:15:29.220 "bdev_name": "Malloc2",
00:15:29.220 "name": "Malloc2",
00:15:29.220 "nguid": "7FEA71249E7641308EBF9504F4A76640",
00:15:29.220 "uuid": "7fea7124-9e76-4130-8ebf-9504f4a76640"
00:15:29.220 }
00:15:29.220 ]
00:15:29.220 }
00:15:29.220 ]
00:15:29.220 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1493555
00:15:29.220 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:15:29.220 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:15:29.220 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:15:29.220 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:15:29.220 [2024-07-26 11:23:24.833658] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:15:29.220 [2024-07-26 11:23:24.833681] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493696 ]
00:15:29.220 EAL: No free 2048 kB hugepages reported on node 1
00:15:29.220 [2024-07-26 11:23:24.859781] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:15:29.220 [2024-07-26 11:23:24.869890] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:15:29.220 [2024-07-26 11:23:24.869912] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8f64ffb000
00:15:29.220 [2024-07-26 11:23:24.870873] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.871881] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.872888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.873887] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.874891] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.875898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.876912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.877924] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:29.220 [2024-07-26 11:23:24.878930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:15:29.220 [2024-07-26 11:23:24.878939] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8f64ff0000
00:15:29.479 [2024-07-26 11:23:24.879852] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:29.479 [2024-07-26 11:23:24.892205] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
[2024-07-26 11:23:24.892233] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout)
00:15:29.479 [2024-07-26 11:23:24.894299] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
[2024-07-26 11:23:24.894335] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
[2024-07-26 11:23:24.894406] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout)
[2024-07-26 11:23:24.894420] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout)
[2024-07-26 11:23:24.894425] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout)
00:15:29.479 [2024-07-26 11:23:24.895309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
[2024-07-26 11:23:24.895319] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout)
[2024-07-26 11:23:24.895325] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout)
00:15:29.479 [2024-07-26 11:23:24.896317] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
[2024-07-26 11:23:24.896326] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout)
[2024-07-26 11:23:24.896332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms)
00:15:29.480 [2024-07-26 11:23:24.897323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
[2024-07-26 11:23:24.897331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:15:29.480 [2024-07-26 11:23:24.898329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
[2024-07-26 11:23:24.898337] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0
[2024-07-26 11:23:24.898342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms)
[2024-07-26 11:23:24.898347] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:15:29.480 [2024-07-26 11:23:24.898452] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1
[2024-07-26 11:23:24.898456] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
[2024-07-26 11:23:24.898460] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:15:29.480 [2024-07-26 11:23:24.899342] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:15:29.480 [2024-07-26 11:23:24.900341] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:15:29.480 [2024-07-26 11:23:24.901352] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:15:29.480 [2024-07-26 11:23:24.902356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
[2024-07-26 11:23:24.902393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:15:29.480 [2024-07-26 11:23:24.903366] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
[2024-07-26 11:23:24.903375] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[2024-07-26 11:23:24.903379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms)
[2024-07-26 11:23:24.903395] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout)
[2024-07-26 11:23:24.903402] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms)
[2024-07-26 11:23:24.903413] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
[2024-07-26 11:23:24.903417] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
[2024-07-26 11:23:24.903420] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
[2024-07-26 11:23:24.903431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:29.480 [2024-07-26 11:23:24.909636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
[2024-07-26 11:23:24.909647] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072
[2024-07-26 11:23:24.909651] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072
[2024-07-26 11:23:24.909655] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001
[2024-07-26 11:23:24.909659] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
[2024-07-26 11:23:24.909664] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1
[2024-07-26 11:23:24.909668] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1
[2024-07-26 11:23:24.909671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms)
[2024-07-26 11:23:24.909678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms)
[2024-07-26 11:23:24.909689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:15:29.480 [2024-07-26 11:23:24.917630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
[2024-07-26 11:23:24.917643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-26 11:23:24.917651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-26 11:23:24.917658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-26 11:23:24.917665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-26 11:23:24.917672] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms)
[2024-07-26 11:23:24.917679] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms)
[2024-07-26 11:23:24.917688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:15:29.480 [2024-07-26 11:23:24.925631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
[2024-07-26 11:23:24.925638] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms
[2024-07-26 11:23:24.925643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms)
[2024-07-26 11:23:24.925650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms)
[2024-07-26 11:23:24.925655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms)
[2024-07-26 11:23:24.925663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:15:29.480 [2024-07-26 11:23:24.933631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
[2024-07-26 11:23:24.933683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms)
[2024-07-26 11:23:24.933690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms)
[2024-07-26 11:23:24.933697] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
[2024-07-26 11:23:24.933701] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
[2024-07-26 11:23:24.933704] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
[2024-07-26 11:23:24.933710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:15:29.480 [2024-07-26 11:23:24.941639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
[2024-07-26 11:23:24.941650] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added
[2024-07-26 11:23:24.941660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms)
[2024-07-26 11:23:24.941667] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms)
[2024-07-26 11:23:24.941673] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
[2024-07-26 11:23:24.941677] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
[2024-07-26 11:23:24.941680] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
[2024-07-26 11:23:24.941685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:29.480 [2024-07-26 11:23:24.949632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
[2024-07-26 11:23:24.949646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms)
[2024-07-26 11:23:24.949655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
[2024-07-26 11:23:24.949662] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
[2024-07-26 11:23:24.949665] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
[2024-07-26 11:23:24.949668] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
[2024-07-26 11:23:24.949674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:29.480 [2024-07-26 11:23:24.957630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.957639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms)
00:15:29.481 [2024-07-26 11:23:24.957644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms)
00:15:29.481 [2024-07-26 11:23:24.957651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms)
00:15:29.481 [2024-07-26 11:23:24.957657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms)
00:15:29.481
[2024-07-26 11:23:24.957662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms)
00:15:29.481 [2024-07-26 11:23:24.957666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms)
00:15:29.481 [2024-07-26 11:23:24.957670] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID
00:15:29.481 [2024-07-26 11:23:24.957674] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms)
00:15:29.481 [2024-07-26 11:23:24.957678] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout)
00:15:29.481 [2024-07-26 11:23:24.957693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:15:29.481 [2024-07-26 11:23:24.965631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.965643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:15:29.481 [2024-07-26 11:23:24.973631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.973642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:15:29.481 [2024-07-26 11:23:24.981630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.981641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:15:29.481 [2024-07-26 11:23:24.989631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.989646] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:15:29.481 [2024-07-26 11:23:24.989650] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:15:29.481 [2024-07-26 11:23:24.989653] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:15:29.481 [2024-07-26 11:23:24.989658] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:15:29.481 [2024-07-26 11:23:24.989661] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:15:29.481 [2024-07-26 11:23:24.989666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:15:29.481 [2024-07-26 11:23:24.989672] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:15:29.481 [2024-07-26 11:23:24.989676] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:15:29.481 [2024-07-26 11:23:24.989679] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:29.481 [2024-07-26 11:23:24.989684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:15:29.481 [2024-07-26 11:23:24.989690] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:15:29.481 [2024-07-26 11:23:24.989693] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:15:29.481 [2024-07-26 11:23:24.989696] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:29.481 [2024-07-26 11:23:24.989701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:29.481 [2024-07-26 11:23:24.989707] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:15:29.481 [2024-07-26 11:23:24.989711] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:15:29.481 [2024-07-26 11:23:24.989714] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:29.481 [2024-07-26 11:23:24.989719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:15:29.481 [2024-07-26 11:23:24.997631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.997644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.997653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:15:29.481 [2024-07-26 11:23:24.997659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:15:29.481 =====================================================
00:15:29.481 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:29.481 =====================================================
00:15:29.481 Controller Capabilities/Features
00:15:29.481 ================================
00:15:29.481 Vendor ID: 4e58
00:15:29.481
Subsystem Vendor ID: 4e58 00:15:29.481 Serial Number: SPDK2 00:15:29.481 Model Number: SPDK bdev Controller 00:15:29.481 Firmware Version: 24.09 00:15:29.481 Recommended Arb Burst: 6 00:15:29.481 IEEE OUI Identifier: 8d 6b 50 00:15:29.481 Multi-path I/O 00:15:29.481 May have multiple subsystem ports: Yes 00:15:29.481 May have multiple controllers: Yes 00:15:29.481 Associated with SR-IOV VF: No 00:15:29.481 Max Data Transfer Size: 131072 00:15:29.481 Max Number of Namespaces: 32 00:15:29.481 Max Number of I/O Queues: 127 00:15:29.481 NVMe Specification Version (VS): 1.3 00:15:29.481 NVMe Specification Version (Identify): 1.3 00:15:29.481 Maximum Queue Entries: 256 00:15:29.481 Contiguous Queues Required: Yes 00:15:29.481 Arbitration Mechanisms Supported 00:15:29.481 Weighted Round Robin: Not Supported 00:15:29.481 Vendor Specific: Not Supported 00:15:29.481 Reset Timeout: 15000 ms 00:15:29.481 Doorbell Stride: 4 bytes 00:15:29.481 NVM Subsystem Reset: Not Supported 00:15:29.481 Command Sets Supported 00:15:29.481 NVM Command Set: Supported 00:15:29.481 Boot Partition: Not Supported 00:15:29.481 Memory Page Size Minimum: 4096 bytes 00:15:29.481 Memory Page Size Maximum: 4096 bytes 00:15:29.481 Persistent Memory Region: Not Supported 00:15:29.481 Optional Asynchronous Events Supported 00:15:29.481 Namespace Attribute Notices: Supported 00:15:29.481 Firmware Activation Notices: Not Supported 00:15:29.481 ANA Change Notices: Not Supported 00:15:29.481 PLE Aggregate Log Change Notices: Not Supported 00:15:29.481 LBA Status Info Alert Notices: Not Supported 00:15:29.481 EGE Aggregate Log Change Notices: Not Supported 00:15:29.481 Normal NVM Subsystem Shutdown event: Not Supported 00:15:29.481 Zone Descriptor Change Notices: Not Supported 00:15:29.481 Discovery Log Change Notices: Not Supported 00:15:29.481 Controller Attributes 00:15:29.481 128-bit Host Identifier: Supported 00:15:29.481 Non-Operational Permissive Mode: Not Supported 00:15:29.481 NVM Sets: Not Supported 
00:15:29.481 Read Recovery Levels: Not Supported 00:15:29.481 Endurance Groups: Not Supported 00:15:29.481 Predictable Latency Mode: Not Supported 00:15:29.481 Traffic Based Keep ALive: Not Supported 00:15:29.481 Namespace Granularity: Not Supported 00:15:29.481 SQ Associations: Not Supported 00:15:29.481 UUID List: Not Supported 00:15:29.481 Multi-Domain Subsystem: Not Supported 00:15:29.481 Fixed Capacity Management: Not Supported 00:15:29.481 Variable Capacity Management: Not Supported 00:15:29.481 Delete Endurance Group: Not Supported 00:15:29.481 Delete NVM Set: Not Supported 00:15:29.481 Extended LBA Formats Supported: Not Supported 00:15:29.481 Flexible Data Placement Supported: Not Supported 00:15:29.481 00:15:29.481 Controller Memory Buffer Support 00:15:29.481 ================================ 00:15:29.481 Supported: No 00:15:29.481 00:15:29.481 Persistent Memory Region Support 00:15:29.481 ================================ 00:15:29.481 Supported: No 00:15:29.481 00:15:29.481 Admin Command Set Attributes 00:15:29.481 ============================ 00:15:29.481 Security Send/Receive: Not Supported 00:15:29.481 Format NVM: Not Supported 00:15:29.481 Firmware Activate/Download: Not Supported 00:15:29.481 Namespace Management: Not Supported 00:15:29.481 Device Self-Test: Not Supported 00:15:29.481 Directives: Not Supported 00:15:29.481 NVMe-MI: Not Supported 00:15:29.481 Virtualization Management: Not Supported 00:15:29.481 Doorbell Buffer Config: Not Supported 00:15:29.481 Get LBA Status Capability: Not Supported 00:15:29.481 Command & Feature Lockdown Capability: Not Supported 00:15:29.481 Abort Command Limit: 4 00:15:29.481 Async Event Request Limit: 4 00:15:29.481 Number of Firmware Slots: N/A 00:15:29.481 Firmware Slot 1 Read-Only: N/A 00:15:29.481 Firmware Activation Without Reset: N/A 00:15:29.481 Multiple Update Detection Support: N/A 00:15:29.481 Firmware Update Granularity: No Information Provided 00:15:29.481 Per-Namespace SMART Log: No 00:15:29.481 
Asymmetric Namespace Access Log Page: Not Supported 00:15:29.481 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:29.481 Command Effects Log Page: Supported 00:15:29.481 Get Log Page Extended Data: Supported 00:15:29.481 Telemetry Log Pages: Not Supported 00:15:29.481 Persistent Event Log Pages: Not Supported 00:15:29.481 Supported Log Pages Log Page: May Support 00:15:29.482 Commands Supported & Effects Log Page: Not Supported 00:15:29.482 Feature Identifiers & Effects Log Page:May Support 00:15:29.482 NVMe-MI Commands & Effects Log Page: May Support 00:15:29.482 Data Area 4 for Telemetry Log: Not Supported 00:15:29.482 Error Log Page Entries Supported: 128 00:15:29.482 Keep Alive: Supported 00:15:29.482 Keep Alive Granularity: 10000 ms 00:15:29.482 00:15:29.482 NVM Command Set Attributes 00:15:29.482 ========================== 00:15:29.482 Submission Queue Entry Size 00:15:29.482 Max: 64 00:15:29.482 Min: 64 00:15:29.482 Completion Queue Entry Size 00:15:29.482 Max: 16 00:15:29.482 Min: 16 00:15:29.482 Number of Namespaces: 32 00:15:29.482 Compare Command: Supported 00:15:29.482 Write Uncorrectable Command: Not Supported 00:15:29.482 Dataset Management Command: Supported 00:15:29.482 Write Zeroes Command: Supported 00:15:29.482 Set Features Save Field: Not Supported 00:15:29.482 Reservations: Not Supported 00:15:29.482 Timestamp: Not Supported 00:15:29.482 Copy: Supported 00:15:29.482 Volatile Write Cache: Present 00:15:29.482 Atomic Write Unit (Normal): 1 00:15:29.482 Atomic Write Unit (PFail): 1 00:15:29.482 Atomic Compare & Write Unit: 1 00:15:29.482 Fused Compare & Write: Supported 00:15:29.482 Scatter-Gather List 00:15:29.482 SGL Command Set: Supported (Dword aligned) 00:15:29.482 SGL Keyed: Not Supported 00:15:29.482 SGL Bit Bucket Descriptor: Not Supported 00:15:29.482 SGL Metadata Pointer: Not Supported 00:15:29.482 Oversized SGL: Not Supported 00:15:29.482 SGL Metadata Address: Not Supported 00:15:29.482 SGL Offset: Not Supported 00:15:29.482 Transport 
SGL Data Block: Not Supported 00:15:29.482 Replay Protected Memory Block: Not Supported 00:15:29.482 00:15:29.482 Firmware Slot Information 00:15:29.482 ========================= 00:15:29.482 Active slot: 1 00:15:29.482 Slot 1 Firmware Revision: 24.09 00:15:29.482 00:15:29.482 00:15:29.482 Commands Supported and Effects 00:15:29.482 ============================== 00:15:29.482 Admin Commands 00:15:29.482 -------------- 00:15:29.482 Get Log Page (02h): Supported 00:15:29.482 Identify (06h): Supported 00:15:29.482 Abort (08h): Supported 00:15:29.482 Set Features (09h): Supported 00:15:29.482 Get Features (0Ah): Supported 00:15:29.482 Asynchronous Event Request (0Ch): Supported 00:15:29.482 Keep Alive (18h): Supported 00:15:29.482 I/O Commands 00:15:29.482 ------------ 00:15:29.482 Flush (00h): Supported LBA-Change 00:15:29.482 Write (01h): Supported LBA-Change 00:15:29.482 Read (02h): Supported 00:15:29.482 Compare (05h): Supported 00:15:29.482 Write Zeroes (08h): Supported LBA-Change 00:15:29.482 Dataset Management (09h): Supported LBA-Change 00:15:29.482 Copy (19h): Supported LBA-Change 00:15:29.482 00:15:29.482 Error Log 00:15:29.482 ========= 00:15:29.482 00:15:29.482 Arbitration 00:15:29.482 =========== 00:15:29.482 Arbitration Burst: 1 00:15:29.482 00:15:29.482 Power Management 00:15:29.482 ================ 00:15:29.482 Number of Power States: 1 00:15:29.482 Current Power State: Power State #0 00:15:29.482 Power State #0: 00:15:29.482 Max Power: 0.00 W 00:15:29.482 Non-Operational State: Operational 00:15:29.482 Entry Latency: Not Reported 00:15:29.482 Exit Latency: Not Reported 00:15:29.482 Relative Read Throughput: 0 00:15:29.482 Relative Read Latency: 0 00:15:29.482 Relative Write Throughput: 0 00:15:29.482 Relative Write Latency: 0 00:15:29.482 Idle Power: Not Reported 00:15:29.482 Active Power: Not Reported 00:15:29.482 Non-Operational Permissive Mode: Not Supported 00:15:29.482 00:15:29.482 Health Information 00:15:29.482 ================== 00:15:29.482 
Critical Warnings: 00:15:29.482 Available Spare Space: OK 00:15:29.482 Temperature: OK 00:15:29.482 Device Reliability: OK 00:15:29.482 Read Only: No 00:15:29.482 Volatile Memory Backup: OK 00:15:29.482 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:29.482 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:29.482 Available Spare: 0% 00:15:29.482 Available Sp[2024-07-26 11:23:24.997741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:29.482 [2024-07-26 11:23:25.005630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:29.482 [2024-07-26 11:23:25.005657] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:29.482 [2024-07-26 11:23:25.005665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.482 [2024-07-26 11:23:25.005670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.482 [2024-07-26 11:23:25.005675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.482 [2024-07-26 11:23:25.005681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.482 [2024-07-26 11:23:25.005727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:29.482 [2024-07-26 11:23:25.005737] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:29.482 [2024-07-26 11:23:25.006734] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.482 [2024-07-26 11:23:25.006774] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:29.482 [2024-07-26 11:23:25.006780] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:29.482 [2024-07-26 11:23:25.007734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:29.482 [2024-07-26 11:23:25.007744] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:29.482 [2024-07-26 11:23:25.007790] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:29.482 [2024-07-26 11:23:25.010631] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:29.482 are Threshold: 0% 00:15:29.482 Life Percentage Used: 0% 00:15:29.482 Data Units Read: 0 00:15:29.482 Data Units Written: 0 00:15:29.482 Host Read Commands: 0 00:15:29.482 Host Write Commands: 0 00:15:29.482 Controller Busy Time: 0 minutes 00:15:29.482 Power Cycles: 0 00:15:29.482 Power On Hours: 0 hours 00:15:29.482 Unsafe Shutdowns: 0 00:15:29.482 Unrecoverable Media Errors: 0 00:15:29.482 Lifetime Error Log Entries: 0 00:15:29.482 Warning Temperature Time: 0 minutes 00:15:29.482 Critical Temperature Time: 0 minutes 00:15:29.482 00:15:29.482 Number of Queues 00:15:29.482 ================ 00:15:29.482 Number of I/O Submission Queues: 127 00:15:29.482 Number of I/O Completion Queues: 127 00:15:29.482 00:15:29.482 Active Namespaces 00:15:29.482 ================= 00:15:29.482 Namespace ID:1 00:15:29.482 Error Recovery Timeout: Unlimited 00:15:29.482 Command Set Identifier: NVM (00h) 00:15:29.482 Deallocate: 
Supported 00:15:29.482 Deallocated/Unwritten Error: Not Supported 00:15:29.482 Deallocated Read Value: Unknown 00:15:29.482 Deallocate in Write Zeroes: Not Supported 00:15:29.482 Deallocated Guard Field: 0xFFFF 00:15:29.482 Flush: Supported 00:15:29.482 Reservation: Supported 00:15:29.482 Namespace Sharing Capabilities: Multiple Controllers 00:15:29.482 Size (in LBAs): 131072 (0GiB) 00:15:29.482 Capacity (in LBAs): 131072 (0GiB) 00:15:29.482 Utilization (in LBAs): 131072 (0GiB) 00:15:29.482 NGUID: 7FEA71249E7641308EBF9504F4A76640 00:15:29.482 UUID: 7fea7124-9e76-4130-8ebf-9504f4a76640 00:15:29.482 Thin Provisioning: Not Supported 00:15:29.482 Per-NS Atomic Units: Yes 00:15:29.482 Atomic Boundary Size (Normal): 0 00:15:29.482 Atomic Boundary Size (PFail): 0 00:15:29.482 Atomic Boundary Offset: 0 00:15:29.482 Maximum Single Source Range Length: 65535 00:15:29.482 Maximum Copy Length: 65535 00:15:29.482 Maximum Source Range Count: 1 00:15:29.482 NGUID/EUI64 Never Reused: No 00:15:29.482 Namespace Write Protected: No 00:15:29.482 Number of LBA Formats: 1 00:15:29.482 Current LBA Format: LBA Format #00 00:15:29.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:29.482 00:15:29.483 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:29.483 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.740 [2024-07-26 11:23:25.227987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.001 Initializing NVMe Controllers 00:15:35.001 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:35.001 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:35.001 
Initialization complete. Launching workers. 00:15:35.001 ======================================================== 00:15:35.001 Latency(us) 00:15:35.001 Device Information : IOPS MiB/s Average min max 00:15:35.001 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39877.79 155.77 3210.19 945.50 10640.32 00:15:35.001 ======================================================== 00:15:35.001 Total : 39877.79 155.77 3210.19 945.50 10640.32 00:15:35.001 00:15:35.001 [2024-07-26 11:23:30.335900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.001 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:35.001 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.001 [2024-07-26 11:23:30.554549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.262 Initializing NVMe Controllers 00:15:40.262 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:40.262 Initialization complete. Launching workers. 
00:15:40.262 ======================================================== 00:15:40.262 Latency(us) 00:15:40.262 Device Information : IOPS MiB/s Average min max 00:15:40.262 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39944.32 156.03 3204.06 956.98 6700.12 00:15:40.262 ======================================================== 00:15:40.262 Total : 39944.32 156.03 3204.06 956.98 6700.12 00:15:40.262 00:15:40.262 [2024-07-26 11:23:35.571825] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.262 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:40.262 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.262 [2024-07-26 11:23:35.764076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.608 [2024-07-26 11:23:40.911736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.608 Initializing NVMe Controllers 00:15:45.608 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.608 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.608 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:45.608 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:45.608 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:45.608 Initialization complete. Launching workers. 
00:15:45.608 Starting thread on core 2 00:15:45.608 Starting thread on core 3 00:15:45.608 Starting thread on core 1 00:15:45.608 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:45.608 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.608 [2024-07-26 11:23:41.200749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.888 [2024-07-26 11:23:44.254833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.888 Initializing NVMe Controllers 00:15:48.888 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.888 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:48.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:48.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:48.888 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:48.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:48.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:48.888 Initialization complete. Launching workers. 
00:15:48.888 Starting thread on core 1 with urgent priority queue 00:15:48.888 Starting thread on core 2 with urgent priority queue 00:15:48.888 Starting thread on core 3 with urgent priority queue 00:15:48.888 Starting thread on core 0 with urgent priority queue 00:15:48.888 SPDK bdev Controller (SPDK2 ) core 0: 1597.33 IO/s 62.60 secs/100000 ios 00:15:48.888 SPDK bdev Controller (SPDK2 ) core 1: 1739.33 IO/s 57.49 secs/100000 ios 00:15:48.888 SPDK bdev Controller (SPDK2 ) core 2: 2142.00 IO/s 46.69 secs/100000 ios 00:15:48.888 SPDK bdev Controller (SPDK2 ) core 3: 2144.00 IO/s 46.64 secs/100000 ios 00:15:48.888 ======================================================== 00:15:48.888 00:15:48.888 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:48.888 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.888 [2024-07-26 11:23:44.520212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.888 Initializing NVMe Controllers 00:15:48.888 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.888 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.888 Namespace ID: 1 size: 0GB 00:15:48.888 Initialization complete. 00:15:48.888 INFO: using host memory buffer for IO 00:15:48.888 Hello world! 
00:15:48.888 [2024-07-26 11:23:44.530283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.146 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:49.146 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.146 [2024-07-26 11:23:44.792030] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.520 Initializing NVMe Controllers 00:15:50.520 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.520 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.520 Initialization complete. Launching workers. 00:15:50.520 submit (in ns) avg, min, max = 6732.9, 3127.6, 4000002.9 00:15:50.520 complete (in ns) avg, min, max = 21981.6, 1698.1, 4022293.3 00:15:50.520 00:15:50.520 Submit histogram 00:15:50.520 ================ 00:15:50.520 Range in us Cumulative Count 00:15:50.520 3.124 - 3.139: 0.0060% ( 1) 00:15:50.520 3.139 - 3.154: 0.0180% ( 2) 00:15:50.520 3.154 - 3.170: 0.0240% ( 1) 00:15:50.520 3.170 - 3.185: 0.0481% ( 4) 00:15:50.520 3.185 - 3.200: 0.1142% ( 11) 00:15:50.520 3.200 - 3.215: 0.9675% ( 142) 00:15:50.520 3.215 - 3.230: 4.4411% ( 578) 00:15:50.520 3.230 - 3.246: 9.9279% ( 913) 00:15:50.520 3.246 - 3.261: 16.4543% ( 1086) 00:15:50.520 3.261 - 3.276: 23.2151% ( 1125) 00:15:50.520 3.276 - 3.291: 29.1887% ( 994) 00:15:50.520 3.291 - 3.307: 34.0204% ( 804) 00:15:50.520 3.307 - 3.322: 40.0781% ( 1008) 00:15:50.520 3.322 - 3.337: 46.3642% ( 1046) 00:15:50.520 3.337 - 3.352: 52.0913% ( 953) 00:15:50.520 3.352 - 3.368: 57.0853% ( 831) 00:15:50.520 3.368 - 3.383: 64.2909% ( 1199) 00:15:50.520 3.383 - 3.398: 71.1899% ( 1148) 00:15:50.520 3.398 - 3.413: 76.1118% ( 819) 00:15:50.520 3.413 - 3.429: 
80.9916% ( 812) 00:15:50.520 3.429 - 3.444: 83.8942% ( 483) 00:15:50.520 3.444 - 3.459: 85.9796% ( 347) 00:15:50.520 3.459 - 3.474: 86.9832% ( 167) 00:15:50.520 3.474 - 3.490: 87.5060% ( 87) 00:15:50.520 3.490 - 3.505: 87.8786% ( 62) 00:15:50.520 3.505 - 3.520: 88.3534% ( 79) 00:15:50.520 3.520 - 3.535: 89.1106% ( 126) 00:15:50.520 3.535 - 3.550: 90.1142% ( 167) 00:15:50.520 3.550 - 3.566: 90.9796% ( 144) 00:15:50.520 3.566 - 3.581: 91.8570% ( 146) 00:15:50.520 3.581 - 3.596: 92.6923% ( 139) 00:15:50.520 3.596 - 3.611: 93.6238% ( 155) 00:15:50.520 3.611 - 3.627: 94.6214% ( 166) 00:15:50.520 3.627 - 3.642: 95.7632% ( 190) 00:15:50.520 3.642 - 3.657: 96.6587% ( 149) 00:15:50.520 3.657 - 3.672: 97.3498% ( 115) 00:15:50.520 3.672 - 3.688: 97.9207% ( 95) 00:15:50.520 3.688 - 3.703: 98.4014% ( 80) 00:15:50.520 3.703 - 3.718: 98.7921% ( 65) 00:15:50.520 3.718 - 3.733: 99.1046% ( 52) 00:15:50.520 3.733 - 3.749: 99.3209% ( 36) 00:15:50.520 3.749 - 3.764: 99.4531% ( 22) 00:15:50.520 3.764 - 3.779: 99.5733% ( 20) 00:15:50.520 3.779 - 3.794: 99.6214% ( 8) 00:15:50.520 3.794 - 3.810: 99.6334% ( 2) 00:15:50.520 3.810 - 3.825: 99.6575% ( 4) 00:15:50.520 3.825 - 3.840: 99.6635% ( 1) 00:15:50.520 3.840 - 3.855: 99.6755% ( 2) 00:15:50.520 3.855 - 3.870: 99.6815% ( 1) 00:15:50.520 3.901 - 3.931: 99.6875% ( 1) 00:15:50.520 5.242 - 5.272: 99.6935% ( 1) 00:15:50.520 5.272 - 5.303: 99.7055% ( 2) 00:15:50.520 5.303 - 5.333: 99.7115% ( 1) 00:15:50.520 5.333 - 5.364: 99.7296% ( 3) 00:15:50.520 5.364 - 5.394: 99.7356% ( 1) 00:15:50.520 5.547 - 5.577: 99.7416% ( 1) 00:15:50.520 5.608 - 5.638: 99.7536% ( 2) 00:15:50.520 5.699 - 5.730: 99.7596% ( 1) 00:15:50.520 5.790 - 5.821: 99.7776% ( 3) 00:15:50.520 5.912 - 5.943: 99.7837% ( 1) 00:15:50.520 6.126 - 6.156: 99.7897% ( 1) 00:15:50.520 6.278 - 6.309: 99.7957% ( 1) 00:15:50.520 6.339 - 6.370: 99.8017% ( 1) 00:15:50.520 6.400 - 6.430: 99.8197% ( 3) 00:15:50.520 6.461 - 6.491: 99.8257% ( 1) 00:15:50.520 6.491 - 6.522: 99.8317% ( 1) 00:15:50.520 
6.522 - 6.552: 99.8377% ( 1) 00:15:50.520 6.583 - 6.613: 99.8438% ( 1) 00:15:50.520 6.613 - 6.644: 99.8498% ( 1) 00:15:50.520 6.735 - 6.766: 99.8558% ( 1) 00:15:50.520 6.766 - 6.796: 99.8618% ( 1) 00:15:50.520 6.918 - 6.949: 99.8678% ( 1) 00:15:50.520 6.949 - 6.979: 99.8738% ( 1) 00:15:50.520 7.040 - 7.070: 99.8798% ( 1) 00:15:50.520 7.070 - 7.101: 99.8858% ( 1) 00:15:50.520 7.131 - 7.162: 99.8918% ( 1) 00:15:50.520 7.223 - 7.253: 99.8978% ( 1) 00:15:50.520 7.558 - 7.589: 99.9038% ( 1) 00:15:50.520 8.107 - 8.168: 99.9099% ( 1) 00:15:50.520 [2024-07-26 11:23:45.887609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.520 8.290 - 8.350: 99.9159% ( 1) 00:15:50.520 3994.575 - 4025.783: 100.0000% ( 14) 00:15:50.520 00:15:50.520 Complete histogram 00:15:50.520 ================== 00:15:50.520 Range in us Cumulative Count 00:15:50.520 1.691 - 1.699: 0.0060% ( 1) 00:15:50.520 1.699 - 1.707: 0.0180% ( 2) 00:15:50.520 1.707 - 1.714: 0.0721% ( 9) 00:15:50.520 1.714 - 1.722: 0.1382% ( 11) 00:15:50.520 1.722 - 1.730: 0.1683% ( 5) 00:15:50.520 1.745 - 1.752: 0.3005% ( 22) 00:15:50.520 1.752 - 1.760: 2.8245% ( 420) 00:15:50.520 1.760 - 1.768: 12.7103% ( 1645) 00:15:50.520 1.768 - 1.775: 25.0300% ( 2050) 00:15:50.520 1.775 - 1.783: 29.9279% ( 815) 00:15:50.520 1.783 - 1.790: 31.5745% ( 274) 00:15:50.520 1.790 - 1.798: 32.6082% ( 172) 00:15:50.520 1.798 - 1.806: 33.6298% ( 170) 00:15:50.520 1.806 - 1.813: 37.9988% ( 727) 00:15:50.520 1.813 - 1.821: 54.5673% ( 2757) 00:15:50.520 1.821 - 1.829: 76.4123% ( 3635) 00:15:50.520 1.829 - 1.836: 88.2692% ( 1973) 00:15:50.520 1.836 - 1.844: 92.5000% ( 704) 00:15:50.520 1.844 - 1.851: 94.4351% ( 322) 00:15:50.520 1.851 - 1.859: 95.9916% ( 259) 00:15:50.520 1.859 - 1.867: 96.8630% ( 145) 00:15:50.520 1.867 - 1.874: 97.2957% ( 72) 00:15:50.520 1.874 - 1.882: 97.6082% ( 52) 00:15:50.520 1.882 - 1.890: 98.0168% ( 68) 00:15:50.520 1.890 - 1.897: 98.3654% ( 58) 00:15:50.520 1.897 - 
1.905: 98.7320% ( 61) 00:15:50.520 1.905 - 1.912: 98.9904% ( 43) 00:15:50.520 1.912 - 1.920: 99.1106% ( 20) 00:15:50.520 1.920 - 1.928: 99.1707% ( 10) 00:15:50.520 1.928 - 1.935: 99.2428% ( 12) 00:15:50.520 1.935 - 1.943: 99.2608% ( 3) 00:15:50.520 1.943 - 1.950: 99.2728% ( 2) 00:15:50.520 1.950 - 1.966: 99.2788% ( 1) 00:15:50.520 1.966 - 1.981: 99.2849% ( 1) 00:15:50.520 1.981 - 1.996: 99.2909% ( 1) 00:15:50.520 1.996 - 2.011: 99.2969% ( 1) 00:15:50.520 2.210 - 2.225: 99.3029% ( 1) 00:15:50.520 3.398 - 3.413: 99.3149% ( 2) 00:15:50.520 3.429 - 3.444: 99.3209% ( 1) 00:15:50.520 3.474 - 3.490: 99.3269% ( 1) 00:15:50.520 3.703 - 3.718: 99.3329% ( 1) 00:15:50.520 3.886 - 3.901: 99.3389% ( 1) 00:15:50.520 3.901 - 3.931: 99.3450% ( 1) 00:15:50.520 3.931 - 3.962: 99.3510% ( 1) 00:15:50.520 3.962 - 3.992: 99.3570% ( 1) 00:15:50.520 3.992 - 4.023: 99.3630% ( 1) 00:15:50.520 4.267 - 4.297: 99.3690% ( 1) 00:15:50.520 4.297 - 4.328: 99.3750% ( 1) 00:15:50.520 4.358 - 4.389: 99.3870% ( 2) 00:15:50.520 4.754 - 4.785: 99.3930% ( 1) 00:15:50.520 4.815 - 4.846: 99.3990% ( 1) 00:15:50.520 5.181 - 5.211: 99.4050% ( 1) 00:15:50.520 5.211 - 5.242: 99.4111% ( 1) 00:15:50.520 5.242 - 5.272: 99.4171% ( 1) 00:15:50.520 5.425 - 5.455: 99.4231% ( 1) 00:15:50.521 5.455 - 5.486: 99.4291% ( 1) 00:15:50.521 5.486 - 5.516: 99.4351% ( 1) 00:15:50.521 5.669 - 5.699: 99.4411% ( 1) 00:15:50.521 5.699 - 5.730: 99.4471% ( 1) 00:15:50.521 5.912 - 5.943: 99.4591% ( 2) 00:15:50.521 5.943 - 5.973: 99.4651% ( 1) 00:15:50.521 6.430 - 6.461: 99.4712% ( 1) 00:15:50.521 6.522 - 6.552: 99.4772% ( 1) 00:15:50.521 6.796 - 6.827: 99.4832% ( 1) 00:15:50.521 7.131 - 7.162: 99.4892% ( 1) 00:15:50.521 11.947 - 12.008: 99.4952% ( 1) 00:15:50.521 3978.971 - 3994.575: 99.5012% ( 1) 00:15:50.521 3994.575 - 4025.783: 100.0000% ( 83) 00:15:50.521 00:15:50.521 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 
2 00:15:50.521 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:50.521 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:50.521 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:50.521 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:50.521 [ 00:15:50.521 { 00:15:50.521 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:50.521 "subtype": "Discovery", 00:15:50.521 "listen_addresses": [], 00:15:50.521 "allow_any_host": true, 00:15:50.521 "hosts": [] 00:15:50.521 }, 00:15:50.521 { 00:15:50.521 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:50.521 "subtype": "NVMe", 00:15:50.521 "listen_addresses": [ 00:15:50.521 { 00:15:50.521 "trtype": "VFIOUSER", 00:15:50.521 "adrfam": "IPv4", 00:15:50.521 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:50.521 "trsvcid": "0" 00:15:50.521 } 00:15:50.521 ], 00:15:50.521 "allow_any_host": true, 00:15:50.521 "hosts": [], 00:15:50.521 "serial_number": "SPDK1", 00:15:50.521 "model_number": "SPDK bdev Controller", 00:15:50.521 "max_namespaces": 32, 00:15:50.521 "min_cntlid": 1, 00:15:50.521 "max_cntlid": 65519, 00:15:50.521 "namespaces": [ 00:15:50.521 { 00:15:50.521 "nsid": 1, 00:15:50.521 "bdev_name": "Malloc1", 00:15:50.521 "name": "Malloc1", 00:15:50.521 "nguid": "60408C3AA76F4C369FA522CCBA132ED6", 00:15:50.521 "uuid": "60408c3a-a76f-4c36-9fa5-22ccba132ed6" 00:15:50.521 }, 00:15:50.521 { 00:15:50.521 "nsid": 2, 00:15:50.521 "bdev_name": "Malloc3", 00:15:50.521 "name": "Malloc3", 00:15:50.521 "nguid": "D3AB392E87CD47DD9ECFDF23636998B7", 00:15:50.521 "uuid": "d3ab392e-87cd-47dd-9ecf-df23636998b7" 00:15:50.521 } 00:15:50.521 ] 00:15:50.521 }, 00:15:50.521 { 
00:15:50.521 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:50.521 "subtype": "NVMe", 00:15:50.521 "listen_addresses": [ 00:15:50.521 { 00:15:50.521 "trtype": "VFIOUSER", 00:15:50.521 "adrfam": "IPv4", 00:15:50.521 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:50.521 "trsvcid": "0" 00:15:50.521 } 00:15:50.521 ], 00:15:50.521 "allow_any_host": true, 00:15:50.521 "hosts": [], 00:15:50.521 "serial_number": "SPDK2", 00:15:50.521 "model_number": "SPDK bdev Controller", 00:15:50.521 "max_namespaces": 32, 00:15:50.521 "min_cntlid": 1, 00:15:50.521 "max_cntlid": 65519, 00:15:50.521 "namespaces": [ 00:15:50.521 { 00:15:50.521 "nsid": 1, 00:15:50.521 "bdev_name": "Malloc2", 00:15:50.521 "name": "Malloc2", 00:15:50.521 "nguid": "7FEA71249E7641308EBF9504F4A76640", 00:15:50.521 "uuid": "7fea7124-9e76-4130-8ebf-9504f4a76640" 00:15:50.521 } 00:15:50.521 ] 00:15:50.521 } 00:15:50.521 ] 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1497168 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:50.521 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:50.521 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.778 [2024-07-26 11:23:46.241642] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.778 Malloc4 00:15:50.778 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:51.035 [2024-07-26 11:23:46.468334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:51.035 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:51.035 Asynchronous Event Request test 00:15:51.035 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.035 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.035 Registering asynchronous event callbacks... 00:15:51.035 Starting namespace attribute notice tests for all controllers... 00:15:51.035 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:51.035 aer_cb - Changed Namespace 00:15:51.035 Cleaning up... 
00:15:51.035 [ 00:15:51.035 { 00:15:51.035 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:51.035 "subtype": "Discovery", 00:15:51.035 "listen_addresses": [], 00:15:51.035 "allow_any_host": true, 00:15:51.035 "hosts": [] 00:15:51.035 }, 00:15:51.035 { 00:15:51.035 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:51.035 "subtype": "NVMe", 00:15:51.035 "listen_addresses": [ 00:15:51.035 { 00:15:51.035 "trtype": "VFIOUSER", 00:15:51.035 "adrfam": "IPv4", 00:15:51.035 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:51.035 "trsvcid": "0" 00:15:51.035 } 00:15:51.035 ], 00:15:51.035 "allow_any_host": true, 00:15:51.035 "hosts": [], 00:15:51.035 "serial_number": "SPDK1", 00:15:51.035 "model_number": "SPDK bdev Controller", 00:15:51.035 "max_namespaces": 32, 00:15:51.035 "min_cntlid": 1, 00:15:51.035 "max_cntlid": 65519, 00:15:51.035 "namespaces": [ 00:15:51.035 { 00:15:51.035 "nsid": 1, 00:15:51.035 "bdev_name": "Malloc1", 00:15:51.035 "name": "Malloc1", 00:15:51.035 "nguid": "60408C3AA76F4C369FA522CCBA132ED6", 00:15:51.036 "uuid": "60408c3a-a76f-4c36-9fa5-22ccba132ed6" 00:15:51.036 }, 00:15:51.036 { 00:15:51.036 "nsid": 2, 00:15:51.036 "bdev_name": "Malloc3", 00:15:51.036 "name": "Malloc3", 00:15:51.036 "nguid": "D3AB392E87CD47DD9ECFDF23636998B7", 00:15:51.036 "uuid": "d3ab392e-87cd-47dd-9ecf-df23636998b7" 00:15:51.036 } 00:15:51.036 ] 00:15:51.036 }, 00:15:51.036 { 00:15:51.036 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:51.036 "subtype": "NVMe", 00:15:51.036 "listen_addresses": [ 00:15:51.036 { 00:15:51.036 "trtype": "VFIOUSER", 00:15:51.036 "adrfam": "IPv4", 00:15:51.036 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:51.036 "trsvcid": "0" 00:15:51.036 } 00:15:51.036 ], 00:15:51.036 "allow_any_host": true, 00:15:51.036 "hosts": [], 00:15:51.036 "serial_number": "SPDK2", 00:15:51.036 "model_number": "SPDK bdev Controller", 00:15:51.036 "max_namespaces": 32, 00:15:51.036 "min_cntlid": 1, 00:15:51.036 "max_cntlid": 65519, 00:15:51.036 "namespaces": [ 
00:15:51.036 { 00:15:51.036 "nsid": 1, 00:15:51.036 "bdev_name": "Malloc2", 00:15:51.036 "name": "Malloc2", 00:15:51.036 "nguid": "7FEA71249E7641308EBF9504F4A76640", 00:15:51.036 "uuid": "7fea7124-9e76-4130-8ebf-9504f4a76640" 00:15:51.036 }, 00:15:51.036 { 00:15:51.036 "nsid": 2, 00:15:51.036 "bdev_name": "Malloc4", 00:15:51.036 "name": "Malloc4", 00:15:51.036 "nguid": "1C370266CA464DAD95CD86F1D80081C9", 00:15:51.036 "uuid": "1c370266-ca46-4dad-95cd-86f1d80081c9" 00:15:51.036 } 00:15:51.036 ] 00:15:51.036 } 00:15:51.036 ] 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1497168 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1489532 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1489532 ']' 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1489532 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.036 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1489532 00:15:51.294 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.294 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.294 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1489532' 00:15:51.294 killing process with pid 1489532 00:15:51.294 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 1489532 00:15:51.294 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1489532 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1497405 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1497405' 00:15:51.553 Process pid: 1497405 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1497405 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1497405 ']' 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.553 
11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.553 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:51.553 [2024-07-26 11:23:47.035596] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:51.553 [2024-07-26 11:23:47.036449] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:15:51.553 [2024-07-26 11:23:47.036486] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.553 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.553 [2024-07-26 11:23:47.100762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.553 [2024-07-26 11:23:47.179027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.553 [2024-07-26 11:23:47.179066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.553 [2024-07-26 11:23:47.179073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.553 [2024-07-26 11:23:47.179079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.553 [2024-07-26 11:23:47.179083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:51.553 [2024-07-26 11:23:47.179146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.553 [2024-07-26 11:23:47.179255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.553 [2024-07-26 11:23:47.179358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.553 [2024-07-26 11:23:47.179359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.811 [2024-07-26 11:23:47.261366] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:51.811 [2024-07-26 11:23:47.261807] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:51.811 [2024-07-26 11:23:47.262006] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:51.811 [2024-07-26 11:23:47.262084] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:51.811 [2024-07-26 11:23:47.262501] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:52.377 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.377 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:52.377 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:53.310 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:53.569 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:53.569 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:53.569 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:53.569 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:53.569 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:53.569 Malloc1 00:15:53.828 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:53.828 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:54.086 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:54.344 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:54.344 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:54.344 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:54.344 Malloc2 00:15:54.344 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:54.602 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:54.859 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1497405 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1497405 ']' 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1497405 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.118 11:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1497405 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1497405' 00:15:55.118 killing process with pid 1497405 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1497405 00:15:55.118 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1497405 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:55.378 00:15:55.378 real 0m51.323s 00:15:55.378 user 3m23.077s 00:15:55.378 sys 0m3.469s 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:55.378 ************************************ 00:15:55.378 END TEST nvmf_vfio_user 00:15:55.378 ************************************ 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.378 ************************************ 00:15:55.378 START TEST nvmf_vfio_user_nvme_compliance 00:15:55.378 ************************************ 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:55.378 * Looking for test storage... 00:15:55.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.378 11:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.378 11:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:55.378 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:55.379 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:55.379 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1498089 00:15:55.379 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1498089' 00:15:55.379 Process pid: 1498089 00:15:55.379 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:55.379 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:55.379 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1498089 00:15:55.379 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1498089 ']' 00:15:55.379 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.379 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:55.379 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.379 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.379 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.638 [2024-07-26 11:23:51.044325] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:15:55.638 [2024-07-26 11:23:51.044376] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.638 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.638 [2024-07-26 11:23:51.109617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.638 [2024-07-26 11:23:51.181471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.638 [2024-07-26 11:23:51.181507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.638 [2024-07-26 11:23:51.181514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.638 [2024-07-26 11:23:51.181520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.638 [2024-07-26 11:23:51.181525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.638 [2024-07-26 11:23:51.181589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.638 [2024-07-26 11:23:51.181697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.638 [2024-07-26 11:23:51.181697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.204 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.204 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:56.204 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.581 11:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.581 malloc0 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:57.581 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:57.581 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.581 00:15:57.581 00:15:57.581 CUnit - A unit testing framework for C - Version 2.1-3 00:15:57.581 http://cunit.sourceforge.net/ 00:15:57.581 00:15:57.581 00:15:57.581 Suite: nvme_compliance 00:15:57.581 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 11:23:53.073580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.581 [2024-07-26 11:23:53.074915] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:57.581 [2024-07-26 11:23:53.074930] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:57.581 [2024-07-26 11:23:53.074936] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:57.581 [2024-07-26 11:23:53.076607] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.581 passed 00:15:57.581 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 11:23:53.151127] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.581 [2024-07-26 11:23:53.154152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.581 passed 00:15:57.581 Test: admin_identify_ns ...[2024-07-26 11:23:53.232887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.840 [2024-07-26 11:23:53.293635] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:57.840 [2024-07-26 11:23:53.301638] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:57.840 [2024-07-26 
11:23:53.322725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.840 passed 00:15:57.840 Test: admin_get_features_mandatory_features ...[2024-07-26 11:23:53.396289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.840 [2024-07-26 11:23:53.401323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.840 passed 00:15:57.840 Test: admin_get_features_optional_features ...[2024-07-26 11:23:53.476811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.840 [2024-07-26 11:23:53.479830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.098 passed 00:15:58.098 Test: admin_set_features_number_of_queues ...[2024-07-26 11:23:53.556561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.098 [2024-07-26 11:23:53.662717] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.098 passed 00:15:58.098 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 11:23:53.736366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.098 [2024-07-26 11:23:53.739384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.355 passed 00:15:58.355 Test: admin_get_log_page_with_lpo ...[2024-07-26 11:23:53.816021] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.355 [2024-07-26 11:23:53.884636] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:58.355 [2024-07-26 11:23:53.897704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.355 passed 00:15:58.355 Test: fabric_property_get ...[2024-07-26 11:23:53.973498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.355 [2024-07-26 11:23:53.974725] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:58.355 [2024-07-26 11:23:53.976515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.355 passed 00:15:58.613 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 11:23:54.055017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.613 [2024-07-26 11:23:54.056240] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:58.613 [2024-07-26 11:23:54.058034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.613 passed 00:15:58.613 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 11:23:54.131695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.613 [2024-07-26 11:23:54.216637] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:58.613 [2024-07-26 11:23:54.232631] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:58.613 [2024-07-26 11:23:54.237714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.613 passed 00:15:58.872 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 11:23:54.314151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.872 [2024-07-26 11:23:54.315386] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:58.872 [2024-07-26 11:23:54.317182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.872 passed 00:15:58.872 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 11:23:54.392773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.872 [2024-07-26 11:23:54.470635] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:15:58.872 [2024-07-26 11:23:54.494632] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:58.872 [2024-07-26 11:23:54.499706] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.872 passed 00:15:59.130 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 11:23:54.573451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.130 [2024-07-26 11:23:54.574677] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:59.130 [2024-07-26 11:23:54.574701] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:59.130 [2024-07-26 11:23:54.576469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.130 passed 00:15:59.130 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 11:23:54.653039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.130 [2024-07-26 11:23:54.744633] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:59.130 [2024-07-26 11:23:54.752633] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:59.130 [2024-07-26 11:23:54.760635] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:59.130 [2024-07-26 11:23:54.768637] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:59.389 [2024-07-26 11:23:54.797717] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.389 passed 00:15:59.389 Test: admin_create_io_sq_verify_pc ...[2024-07-26 11:23:54.874260] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.389 [2024-07-26 11:23:54.889638] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:59.389 
[2024-07-26 11:23:54.907433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.389 passed 00:15:59.389 Test: admin_create_io_qp_max_qps ...[2024-07-26 11:23:54.984935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.823 [2024-07-26 11:23:56.101634] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:01.082 [2024-07-26 11:23:56.493123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.082 passed 00:16:01.082 Test: admin_create_io_sq_shared_cq ...[2024-07-26 11:23:56.570937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:01.082 [2024-07-26 11:23:56.702640] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:01.082 [2024-07-26 11:23:56.739680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.341 passed 00:16:01.341 00:16:01.341 Run Summary: Type Total Ran Passed Failed Inactive 00:16:01.341 suites 1 1 n/a 0 0 00:16:01.341 tests 18 18 18 0 0 00:16:01.341 asserts 360 360 360 0 n/a 00:16:01.341 00:16:01.341 Elapsed time = 1.508 seconds 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1498089 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1498089 ']' 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1498089 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.341 11:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1498089 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1498089' 00:16:01.341 killing process with pid 1498089 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1498089 00:16:01.341 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1498089 00:16:01.600 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:01.601 00:16:01.601 real 0m6.150s 00:16:01.601 user 0m17.545s 00:16:01.601 sys 0m0.459s 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.601 ************************************ 00:16:01.601 END TEST nvmf_vfio_user_nvme_compliance 00:16:01.601 ************************************ 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.601 ************************************ 00:16:01.601 START TEST nvmf_vfio_user_fuzz 00:16:01.601 ************************************ 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:01.601 * Looking for test storage... 00:16:01.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.601 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:01.601 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1499147 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1499147' 00:16:01.601 Process pid: 1499147 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1499147 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1499147 ']' 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.601 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.601 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.536 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.536 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:02.536 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.472 malloc0 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.472 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.730 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.730 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:03.730 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:35.824 Fuzzing completed. Shutting down the fuzz application 00:16:35.824 00:16:35.824 Dumping successful admin opcodes: 00:16:35.824 8, 9, 10, 24, 00:16:35.824 Dumping successful io opcodes: 00:16:35.824 0, 00:16:35.824 NS: 0x200003a1ef00 I/O qp, Total commands completed: 993221, total successful commands: 3887, random_seed: 1656222080 00:16:35.824 NS: 0x200003a1ef00 admin qp, Total commands completed: 243759, total successful commands: 1963, random_seed: 4085594112 00:16:35.824 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:35.824 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.824 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1499147 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1499147 ']' 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1499147 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1499147 00:16:35.825 11:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1499147' 00:16:35.825 killing process with pid 1499147 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1499147 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1499147 00:16:35.825 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:35.825 00:16:35.825 real 0m32.968s 00:16:35.825 user 0m31.078s 00:16:35.825 sys 0m30.468s 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.825 ************************************ 00:16:35.825 END TEST nvmf_vfio_user_fuzz 00:16:35.825 ************************************ 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.825 ************************************ 00:16:35.825 START TEST nvmf_auth_target 00:16:35.825 ************************************ 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:35.825 * Looking for test storage... 00:16:35.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.825 11:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:35.825 11:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.101 11:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:41.101 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:41.101 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.101 11:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:41.101 Found net devices under 0000:86:00.0: cvl_0_0 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
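The discovery loop above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by the `##*/` strip) is how the run maps each E810 PCI function to its kernel netdev name. A condensed sketch of that loop, using the two PCI addresses found in the log (on a machine without these devices the glob simply stays unexpanded):

```shell
# Map PCI functions to their net devices via sysfs, as the log does.
pci_devs=("0000:86:00.0" "0000:86:00.1")   # E810 ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Each bound network function exposes its netdev(s) under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the device name
    net_devs+=("${pci_net_devs[@]}")
done
printf '%s\n' "${net_devs[@]}"
```

On the CI host this yields `cvl_0_0` and `cvl_0_1`, the two names the rest of the run wires together.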
00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:41.101 Found net devices under 0000:86:00.1: cvl_0_1 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.101 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.102 11:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:41.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:16:41.102 00:16:41.102 --- 10.0.0.2 ping statistics --- 00:16:41.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.102 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:16:41.102 00:16:41.102 --- 10.0.0.1 ping statistics --- 00:16:41.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.102 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.102 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1508175 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1508175 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1508175 ']' 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
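Before `nvmf_tgt` is launched above, `nvmf_tcp_init` builds the test topology by moving the target-side interface into its own network namespace so target and initiator exchange traffic over a real link. A hedged replay of those steps as they appear in the log (requires root and the physical `cvl_0_*` interfaces, so this is illustrative rather than portable):

```shell
# Replay of the nvmf_tcp_init sequence from the log: isolate the target
# NIC in a namespace, address both ends, open the NVMe/TCP port, verify.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> initiator
```

The namespace is also why the target is started as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: the listener at 10.0.0.2:4420 only exists inside that namespace.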
00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.102 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1508206 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4d74ca1472d4939aa7a407938a48fbd8df5ff33926607bcd 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ezp 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4d74ca1472d4939aa7a407938a48fbd8df5ff33926607bcd 0 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4d74ca1472d4939aa7a407938a48fbd8df5ff33926607bcd 0 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4d74ca1472d4939aa7a407938a48fbd8df5ff33926607bcd 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ezp 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ezp 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ezp 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cd8a3f8e8e25b26e80d140a6d9836e75ae83cf82237de272800a289fefb729d7 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PJz 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cd8a3f8e8e25b26e80d140a6d9836e75ae83cf82237de272800a289fefb729d7 3 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cd8a3f8e8e25b26e80d140a6d9836e75ae83cf82237de272800a289fefb729d7 3 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cd8a3f8e8e25b26e80d140a6d9836e75ae83cf82237de272800a289fefb729d7 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:16:41.361 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:41.361 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PJz 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PJz 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.PJz 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42749feedbd5ceda2c3680225438c326 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UHZ 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42749feedbd5ceda2c3680225438c326 1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
42749feedbd5ceda2c3680225438c326 1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42749feedbd5ceda2c3680225438c326 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UHZ 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UHZ 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.UHZ 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1e61a4a41efa39867ecb6f026e0d61ddae72b9483cf0b3ad 00:16:41.621 11:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dQq 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1e61a4a41efa39867ecb6f026e0d61ddae72b9483cf0b3ad 2 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1e61a4a41efa39867ecb6f026e0d61ddae72b9483cf0b3ad 2 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1e61a4a41efa39867ecb6f026e0d61ddae72b9483cf0b3ad 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dQq 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dQq 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.dQq 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6f95867119e98fbfcd366662914a61eeabc893ce63630ea7 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Um4 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6f95867119e98fbfcd366662914a61eeabc893ce63630ea7 2 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6f95867119e98fbfcd366662914a61eeabc893ce63630ea7 2 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6f95867119e98fbfcd366662914a61eeabc893ce63630ea7 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Um4 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Um4 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.Um4 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77aced52c2c3a4b42a29a12e0f318641 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wW0 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77aced52c2c3a4b42a29a12e0f318641 1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77aced52c2c3a4b42a29a12e0f318641 1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77aced52c2c3a4b42a29a12e0f318641 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wW0 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wW0 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.wW0 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c24332bb4b8e9ac8d3c17d404a7e5b08ced9b27e14d6d53e0334f6392988a3f9 00:16:41.621 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fuh 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c24332bb4b8e9ac8d3c17d404a7e5b08ced9b27e14d6d53e0334f6392988a3f9 3 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 c24332bb4b8e9ac8d3c17d404a7e5b08ced9b27e14d6d53e0334f6392988a3f9 3 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c24332bb4b8e9ac8d3c17d404a7e5b08ced9b27e14d6d53e0334f6392988a3f9 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:41.622 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fuh 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fuh 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.fuh 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1508175 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1508175 ']' 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
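The `gen_dhchap_key`/`format_dhchap_key` trace above (xxd from /dev/urandom, mktemp, a small inline `python -` step, chmod 0600) can be condensed into one standalone sketch. The base64 payloads visible later in this log decode to the ASCII hex string of the key itself (e.g. `MWU2MWE0…` decodes to `1e61a4…`), which the sketch reproduces; the trailing CRC32 inside the payload is an assumption about what the inline python in `nvmf/common.sh` computes, not something the trace shows directly.

```shell
# Hedged sketch of gen_dhchap_key from nvmf/common.sh, reconstructed
# from the xtrace above. The CRC32 suffix inside the base64 payload is
# an assumption; everything else mirrors the traced commands.
gen_dhchap_key() {
    local digest=$1 len=$2
    local key file
    # <len> hex characters == len/2 random bytes, as in the trace
    # (sha384 48 -> xxd -p -c0 -l 24)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<digest id>:<base64(ascii-hex key || crc32)>:
    python3 - "$key" "$digest" > "$file" <<'PY'
import base64, binascii, struct, sys
key, digest = sys.argv[1], sys.argv[2]
ids = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
blob = key.encode() + struct.pack("<I", binascii.crc32(key.encode()))
print(f"DHHC-1:{ids[digest]:02x}:{base64.b64encode(blob).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}
```

`gen_dhchap_key sha256 32` should print a 0600 temp file path whose contents start with `DHHC-1:01:`, matching the `keys[]`/`ckeys[]` files filled in above.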
00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1508206 /var/tmp/host.sock 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1508206 ']' 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:41.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
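`waitforlisten` runs twice above: once for the target (pid 1508175, default `/var/tmp/spdk.sock`) and once for the host application (pid 1508206, `/var/tmp/host.sock`). Its core is a bounded poll; a simplified sketch with the RPC probe factored out as a hypothetical `rpc_probe` hook (the real helper in `autotest_common.sh` drives `scripts/rpc.py` and also re-checks that the pid is still alive between attempts):

```shell
# Simplified sketch of the waitforlisten retry loop from
# autotest_common.sh. rpc_probe is a hypothetical stand-in for
# "scripts/rpc.py -s <addr> -t 1 rpc_get_methods"; the real function
# also verifies the pid has not exited between attempts.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        if rpc_probe "$rpc_addr"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```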
00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.880 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ezp 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ezp 00:16:42.139 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ezp 00:16:42.397 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.PJz ]] 00:16:42.397 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PJz 00:16:42.397 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.397 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.397 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.397 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PJz 00:16:42.397 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PJz 00:16:42.655 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:42.655 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UHZ 00:16:42.655 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.655 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.655 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UHZ 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UHZ 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.dQq ]] 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dQq 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dQq 00:16:42.656 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dQq 00:16:42.914 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:42.914 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Um4 00:16:42.914 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.914 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.914 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.914 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Um4 00:16:42.914 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Um4 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.wW0 ]] 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wW0 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wW0 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wW0 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fuh 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.fuh 00:16:43.172 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.fuh 00:16:43.430 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:16:43.430 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:43.430 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.430 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.430 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.430 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
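Steps @91–@96 above are the first pass of a nested test matrix: every digest × DH group combination is pushed to the host with `bdev_nvme_set_options`, then each key slot is authenticated in turn. A hedged reconstruction of that loop structure — the digest and dhgroup lists are assumptions extrapolated from this first `sha256`/`null` iteration, and `hostrpc`/`connect_authenticate` are the helpers from `target/auth.sh`:

```shell
# Hedged reconstruction of the driver loop behind steps @91-@96.
# Assumed lists; only sha256/null is visible at this point in the log.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

run_auth_matrix() {
    local digest dhgroup keyid
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # restrict the host to one digest/dhgroup combination,
                # then authenticate with each key slot in turn
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
}
```

With four key slots, this yields 3 × 6 × 4 = 72 connect/authenticate cycles, which is why the same `@93`/`@94`/`@96` pattern repeats through the rest of this log.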
00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.689 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.947 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:43.947 { 00:16:43.947 "cntlid": 1, 00:16:43.947 "qid": 0, 00:16:43.947 "state": "enabled", 00:16:43.947 "thread": "nvmf_tgt_poll_group_000", 00:16:43.947 "listen_address": { 00:16:43.947 "trtype": "TCP", 00:16:43.947 "adrfam": "IPv4", 00:16:43.947 "traddr": "10.0.0.2", 00:16:43.947 "trsvcid": "4420" 00:16:43.947 }, 00:16:43.947 "peer_address": { 00:16:43.947 "trtype": "TCP", 00:16:43.947 "adrfam": "IPv4", 00:16:43.947 "traddr": "10.0.0.1", 00:16:43.947 "trsvcid": "52236" 00:16:43.947 }, 00:16:43.947 "auth": { 00:16:43.947 "state": "completed", 00:16:43.947 "digest": "sha256", 00:16:43.947 "dhgroup": "null" 00:16:43.947 } 00:16:43.947 } 00:16:43.947 ]' 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.947 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.205 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.205 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:44.205 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.205 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.205 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.205 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.464 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.030 11:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.030 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.288 00:16:45.288 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.289 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
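Each `connect_authenticate` pass ends (steps @44–@48 above) by pulling the qpair list from the target and probing three fields with jq. That check, reduced to one reusable helper operating on the `nvmf_subsystem_get_qpairs` JSON shape shown in the trace:

```shell
# The qpair verification from steps @46-@48, factored into one helper.
# It takes the nvmf_subsystem_get_qpairs JSON and asserts the digest,
# the DH group, and that DH-HMAC-CHAP authentication completed.
verify_qpair_auth() {
    local qpairs=$1 digest=$2 dhgroup=$3
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
}
```

An `auth.state` other than `completed` (or a missing `auth` object) fails the `[[ ]]` comparison, which is what makes these passes assert successful authentication rather than mere connectivity.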
00:16:45.289 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.547 { 00:16:45.547 "cntlid": 3, 00:16:45.547 "qid": 0, 00:16:45.547 "state": "enabled", 00:16:45.547 "thread": "nvmf_tgt_poll_group_000", 00:16:45.547 "listen_address": { 00:16:45.547 "trtype": "TCP", 00:16:45.547 "adrfam": "IPv4", 00:16:45.547 "traddr": "10.0.0.2", 00:16:45.547 "trsvcid": "4420" 00:16:45.547 }, 00:16:45.547 "peer_address": { 00:16:45.547 "trtype": "TCP", 00:16:45.547 "adrfam": "IPv4", 00:16:45.547 "traddr": "10.0.0.1", 00:16:45.547 "trsvcid": "52252" 00:16:45.547 }, 00:16:45.547 "auth": { 00:16:45.547 "state": "completed", 00:16:45.547 "digest": "sha256", 00:16:45.547 "dhgroup": "null" 00:16:45.547 } 00:16:45.547 } 00:16:45.547 ]' 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:45.547 11:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.547 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.805 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:46.371 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.629 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.630 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.630 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.630 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.630 
11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.888 00:16:46.888 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.888 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.888 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.146 { 00:16:47.146 "cntlid": 5, 00:16:47.146 "qid": 0, 00:16:47.146 "state": "enabled", 00:16:47.146 "thread": "nvmf_tgt_poll_group_000", 00:16:47.146 "listen_address": { 00:16:47.146 "trtype": "TCP", 00:16:47.146 "adrfam": "IPv4", 00:16:47.146 "traddr": "10.0.0.2", 00:16:47.146 "trsvcid": "4420" 00:16:47.146 }, 00:16:47.146 "peer_address": { 00:16:47.146 "trtype": "TCP", 00:16:47.146 "adrfam": "IPv4", 00:16:47.146 "traddr": 
"10.0.0.1", 00:16:47.146 "trsvcid": "52268" 00:16:47.146 }, 00:16:47.146 "auth": { 00:16:47.146 "state": "completed", 00:16:47.146 "digest": "sha256", 00:16:47.146 "dhgroup": "null" 00:16:47.146 } 00:16:47.146 } 00:16:47.146 ]' 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.146 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.404 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.970 11:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.970 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.227 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.228 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.228 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.228 00:16:48.228 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.228 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.228 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.485 11:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.485 { 00:16:48.485 "cntlid": 7, 00:16:48.485 "qid": 0, 00:16:48.485 "state": "enabled", 00:16:48.485 "thread": "nvmf_tgt_poll_group_000", 00:16:48.485 "listen_address": { 00:16:48.485 "trtype": "TCP", 00:16:48.485 "adrfam": "IPv4", 00:16:48.485 "traddr": "10.0.0.2", 00:16:48.485 "trsvcid": "4420" 00:16:48.485 }, 00:16:48.485 "peer_address": { 00:16:48.485 "trtype": "TCP", 00:16:48.485 "adrfam": "IPv4", 00:16:48.485 "traddr": "10.0.0.1", 00:16:48.485 "trsvcid": "52298" 00:16:48.485 }, 00:16:48.485 "auth": { 00:16:48.485 "state": "completed", 00:16:48.485 "digest": "sha256", 00:16:48.485 "dhgroup": "null" 00:16:48.485 } 00:16:48.485 } 00:16:48.485 ]' 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:48.485 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.743 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.743 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.743 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.743 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.311 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.569 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.827 00:16:49.827 11:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.827 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.827 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.085 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.085 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.085 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.085 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.085 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.085 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.085 { 00:16:50.085 "cntlid": 9, 00:16:50.085 "qid": 0, 00:16:50.085 "state": "enabled", 00:16:50.085 "thread": "nvmf_tgt_poll_group_000", 00:16:50.085 "listen_address": { 00:16:50.085 "trtype": "TCP", 00:16:50.085 "adrfam": "IPv4", 00:16:50.085 "traddr": "10.0.0.2", 00:16:50.085 "trsvcid": "4420" 00:16:50.085 }, 00:16:50.085 "peer_address": { 00:16:50.085 "trtype": "TCP", 00:16:50.085 "adrfam": "IPv4", 00:16:50.086 "traddr": "10.0.0.1", 00:16:50.086 "trsvcid": "58918" 00:16:50.086 }, 00:16:50.086 "auth": { 00:16:50.086 "state": "completed", 00:16:50.086 "digest": "sha256", 00:16:50.086 "dhgroup": "ffdhe2048" 00:16:50.086 } 00:16:50.086 } 00:16:50.086 ]' 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.086 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.344 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:16:50.910 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.910 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.910 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.910 11:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.910 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.910 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.911 11:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.911 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.169 00:16:51.169 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.169 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.169 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.427 { 
00:16:51.427 "cntlid": 11, 00:16:51.427 "qid": 0, 00:16:51.427 "state": "enabled", 00:16:51.427 "thread": "nvmf_tgt_poll_group_000", 00:16:51.427 "listen_address": { 00:16:51.427 "trtype": "TCP", 00:16:51.427 "adrfam": "IPv4", 00:16:51.427 "traddr": "10.0.0.2", 00:16:51.427 "trsvcid": "4420" 00:16:51.427 }, 00:16:51.427 "peer_address": { 00:16:51.427 "trtype": "TCP", 00:16:51.427 "adrfam": "IPv4", 00:16:51.427 "traddr": "10.0.0.1", 00:16:51.427 "trsvcid": "58942" 00:16:51.427 }, 00:16:51.427 "auth": { 00:16:51.427 "state": "completed", 00:16:51.427 "digest": "sha256", 00:16:51.427 "dhgroup": "ffdhe2048" 00:16:51.427 } 00:16:51.427 } 00:16:51.427 ]' 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.427 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.685 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.685 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.685 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.685 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.685 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.685 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:16:52.251 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.251 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.251 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.251 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.251 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.251 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.252 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.252 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.509 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.767 00:16:52.767 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.767 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.767 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.025 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.025 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.025 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.025 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.025 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.025 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.025 { 00:16:53.025 "cntlid": 13, 00:16:53.025 "qid": 0, 00:16:53.025 "state": "enabled", 00:16:53.025 "thread": "nvmf_tgt_poll_group_000", 00:16:53.025 "listen_address": { 00:16:53.025 "trtype": "TCP", 00:16:53.025 "adrfam": "IPv4", 00:16:53.025 "traddr": "10.0.0.2", 00:16:53.025 "trsvcid": "4420" 00:16:53.025 }, 00:16:53.025 "peer_address": { 00:16:53.025 "trtype": "TCP", 00:16:53.025 "adrfam": "IPv4", 00:16:53.025 "traddr": "10.0.0.1", 00:16:53.025 "trsvcid": "58966" 00:16:53.025 }, 00:16:53.025 "auth": { 00:16:53.025 "state": "completed", 00:16:53.025 "digest": "sha256", 00:16:53.025 "dhgroup": "ffdhe2048" 00:16:53.025 } 00:16:53.025 } 00:16:53.025 ]' 00:16:53.025 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.026 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.026 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.026 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.026 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.026 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.026 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.026 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.282 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.847 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.103 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.104 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.104 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.361 { 00:16:54.361 "cntlid": 15, 00:16:54.361 "qid": 0, 00:16:54.361 "state": "enabled", 00:16:54.361 "thread": "nvmf_tgt_poll_group_000", 00:16:54.361 "listen_address": { 00:16:54.361 "trtype": "TCP", 00:16:54.361 "adrfam": "IPv4", 00:16:54.361 "traddr": "10.0.0.2", 00:16:54.361 "trsvcid": "4420" 00:16:54.361 }, 00:16:54.361 "peer_address": { 00:16:54.361 "trtype": "TCP", 00:16:54.361 "adrfam": "IPv4", 00:16:54.361 "traddr": "10.0.0.1", 00:16:54.361 "trsvcid": "58998" 00:16:54.361 }, 00:16:54.361 "auth": { 
00:16:54.361 "state": "completed", 00:16:54.361 "digest": "sha256", 00:16:54.361 "dhgroup": "ffdhe2048" 00:16:54.361 } 00:16:54.361 } 00:16:54.361 ]' 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.361 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.361 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.618 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.618 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.618 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.618 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.618 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.618 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.184 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.442 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.442 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.442 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.442 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.700 00:16:55.700 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.700 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.700 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.958 { 00:16:55.958 "cntlid": 17, 00:16:55.958 "qid": 0, 00:16:55.958 "state": "enabled", 00:16:55.958 "thread": "nvmf_tgt_poll_group_000", 00:16:55.958 "listen_address": { 00:16:55.958 "trtype": "TCP", 00:16:55.958 "adrfam": "IPv4", 00:16:55.958 "traddr": "10.0.0.2", 00:16:55.958 "trsvcid": "4420" 00:16:55.958 }, 00:16:55.958 "peer_address": { 00:16:55.958 "trtype": "TCP", 00:16:55.958 "adrfam": "IPv4", 00:16:55.958 "traddr": "10.0.0.1", 00:16:55.958 "trsvcid": "59034" 00:16:55.958 }, 00:16:55.958 "auth": { 00:16:55.958 "state": "completed", 00:16:55.958 "digest": "sha256", 00:16:55.958 "dhgroup": "ffdhe3072" 00:16:55.958 } 00:16:55.958 } 00:16:55.958 ]' 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.958 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.215 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:16:56.780 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.780 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.780 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.780 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.781 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.781 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.781 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.781 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:57.039 11:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.039 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:57.297 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.297 { 00:16:57.297 "cntlid": 19, 00:16:57.297 "qid": 0, 00:16:57.297 "state": "enabled", 00:16:57.297 "thread": "nvmf_tgt_poll_group_000", 00:16:57.297 "listen_address": { 00:16:57.297 "trtype": "TCP", 00:16:57.297 "adrfam": "IPv4", 00:16:57.297 "traddr": "10.0.0.2", 00:16:57.297 "trsvcid": "4420" 00:16:57.297 }, 00:16:57.297 "peer_address": { 00:16:57.297 "trtype": "TCP", 00:16:57.297 "adrfam": "IPv4", 00:16:57.297 "traddr": "10.0.0.1", 00:16:57.297 "trsvcid": "59048" 00:16:57.297 }, 00:16:57.297 "auth": { 00:16:57.297 "state": "completed", 00:16:57.297 "digest": "sha256", 00:16:57.297 "dhgroup": "ffdhe3072" 00:16:57.297 } 00:16:57.297 } 00:16:57.297 ]' 00:16:57.297 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.555 
11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.555 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.555 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.555 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.555 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.555 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.555 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.813 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.383 11:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 11:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.383 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.663 00:16:58.663 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.663 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.663 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.961 { 
00:16:58.961 "cntlid": 21, 00:16:58.961 "qid": 0, 00:16:58.961 "state": "enabled", 00:16:58.961 "thread": "nvmf_tgt_poll_group_000", 00:16:58.961 "listen_address": { 00:16:58.961 "trtype": "TCP", 00:16:58.961 "adrfam": "IPv4", 00:16:58.961 "traddr": "10.0.0.2", 00:16:58.961 "trsvcid": "4420" 00:16:58.961 }, 00:16:58.961 "peer_address": { 00:16:58.961 "trtype": "TCP", 00:16:58.961 "adrfam": "IPv4", 00:16:58.961 "traddr": "10.0.0.1", 00:16:58.961 "trsvcid": "59076" 00:16:58.961 }, 00:16:58.961 "auth": { 00:16:58.961 "state": "completed", 00:16:58.961 "digest": "sha256", 00:16:58.961 "dhgroup": "ffdhe3072" 00:16:58.961 } 00:16:58.961 } 00:16:58.961 ]' 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.961 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.219 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:16:59.784 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.785 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.785 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.785 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.785 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.785 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.785 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.785 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.044 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.302 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.302 11:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.302 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.302 { 00:17:00.302 "cntlid": 23, 00:17:00.302 "qid": 0, 00:17:00.302 "state": "enabled", 00:17:00.302 "thread": "nvmf_tgt_poll_group_000", 00:17:00.302 "listen_address": { 00:17:00.302 "trtype": "TCP", 00:17:00.302 "adrfam": "IPv4", 00:17:00.302 "traddr": "10.0.0.2", 00:17:00.302 "trsvcid": "4420" 00:17:00.302 }, 00:17:00.302 "peer_address": { 00:17:00.302 "trtype": "TCP", 00:17:00.302 "adrfam": "IPv4", 00:17:00.302 "traddr": "10.0.0.1", 00:17:00.302 "trsvcid": "54734" 00:17:00.302 }, 00:17:00.303 "auth": { 00:17:00.303 "state": "completed", 00:17:00.303 "digest": "sha256", 00:17:00.303 "dhgroup": "ffdhe3072" 00:17:00.303 } 00:17:00.303 } 00:17:00.303 ]' 00:17:00.303 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.561 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.561 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.561 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.561 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.561 11:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.561 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.561 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.820 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.387 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.387 11:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.646 00:17:01.646 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.646 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.646 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.904 { 00:17:01.904 "cntlid": 25, 00:17:01.904 "qid": 0, 00:17:01.904 "state": "enabled", 00:17:01.904 "thread": "nvmf_tgt_poll_group_000", 00:17:01.904 "listen_address": { 00:17:01.904 "trtype": "TCP", 00:17:01.904 "adrfam": "IPv4", 00:17:01.904 "traddr": "10.0.0.2", 00:17:01.904 "trsvcid": "4420" 00:17:01.904 }, 00:17:01.904 "peer_address": { 00:17:01.904 "trtype": "TCP", 00:17:01.904 "adrfam": "IPv4", 00:17:01.904 "traddr": "10.0.0.1", 
00:17:01.904 "trsvcid": "54760" 00:17:01.904 }, 00:17:01.904 "auth": { 00:17:01.904 "state": "completed", 00:17:01.904 "digest": "sha256", 00:17:01.904 "dhgroup": "ffdhe4096" 00:17:01.904 } 00:17:01.904 } 00:17:01.904 ]' 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.904 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.163 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.163 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.163 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.163 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.730 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.989 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.248 00:17:03.248 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.248 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.248 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.507 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.507 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.507 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:03.507 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.507 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.507 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.507 { 00:17:03.507 "cntlid": 27, 00:17:03.507 "qid": 0, 00:17:03.507 "state": "enabled", 00:17:03.507 "thread": "nvmf_tgt_poll_group_000", 00:17:03.507 "listen_address": { 00:17:03.507 "trtype": "TCP", 00:17:03.507 "adrfam": "IPv4", 00:17:03.507 "traddr": "10.0.0.2", 00:17:03.507 "trsvcid": "4420" 00:17:03.507 }, 00:17:03.507 "peer_address": { 00:17:03.507 "trtype": "TCP", 00:17:03.507 "adrfam": "IPv4", 00:17:03.507 "traddr": "10.0.0.1", 00:17:03.507 "trsvcid": "54784" 00:17:03.507 }, 00:17:03.507 "auth": { 00:17:03.507 "state": "completed", 00:17:03.507 "digest": "sha256", 00:17:03.507 "dhgroup": "ffdhe4096" 00:17:03.507 } 00:17:03.507 } 00:17:03.507 ]' 00:17:03.507 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.507 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.507 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.507 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.507 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.507 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.507 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.507 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.766 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:17:04.333 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.334 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.592 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.592 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.592 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.850 00:17:04.850 11:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.850 { 00:17:04.850 "cntlid": 29, 00:17:04.850 "qid": 0, 00:17:04.850 "state": "enabled", 00:17:04.850 "thread": "nvmf_tgt_poll_group_000", 00:17:04.850 "listen_address": { 00:17:04.850 "trtype": "TCP", 00:17:04.850 "adrfam": "IPv4", 00:17:04.850 "traddr": "10.0.0.2", 00:17:04.850 "trsvcid": "4420" 00:17:04.850 }, 00:17:04.850 "peer_address": { 00:17:04.850 "trtype": "TCP", 00:17:04.850 "adrfam": "IPv4", 00:17:04.850 "traddr": "10.0.0.1", 00:17:04.850 "trsvcid": "54798" 00:17:04.850 }, 00:17:04.850 "auth": { 00:17:04.850 "state": "completed", 00:17:04.850 "digest": "sha256", 00:17:04.850 "dhgroup": "ffdhe4096" 00:17:04.850 } 00:17:04.850 } 00:17:04.850 ]' 00:17:04.850 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.109 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.676 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.935 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.194 00:17:06.194 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.194 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.194 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.452 { 00:17:06.452 "cntlid": 31, 00:17:06.452 "qid": 0, 00:17:06.452 "state": "enabled", 00:17:06.452 "thread": "nvmf_tgt_poll_group_000", 
00:17:06.452 "listen_address": { 00:17:06.452 "trtype": "TCP", 00:17:06.452 "adrfam": "IPv4", 00:17:06.452 "traddr": "10.0.0.2", 00:17:06.452 "trsvcid": "4420" 00:17:06.452 }, 00:17:06.452 "peer_address": { 00:17:06.452 "trtype": "TCP", 00:17:06.452 "adrfam": "IPv4", 00:17:06.452 "traddr": "10.0.0.1", 00:17:06.452 "trsvcid": "54828" 00:17:06.452 }, 00:17:06.452 "auth": { 00:17:06.452 "state": "completed", 00:17:06.452 "digest": "sha256", 00:17:06.452 "dhgroup": "ffdhe4096" 00:17:06.452 } 00:17:06.452 } 00:17:06.452 ]' 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.452 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.452 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.453 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.453 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.453 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.453 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.711 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 
00:17:07.277 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.278 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.536 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.536 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.536 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.536 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.795 00:17:07.795 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.795 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.795 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.052 11:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.052 { 00:17:08.052 "cntlid": 33, 00:17:08.052 "qid": 0, 00:17:08.052 "state": "enabled", 00:17:08.052 "thread": "nvmf_tgt_poll_group_000", 00:17:08.052 "listen_address": { 00:17:08.052 "trtype": "TCP", 00:17:08.052 "adrfam": "IPv4", 00:17:08.052 "traddr": "10.0.0.2", 00:17:08.052 "trsvcid": "4420" 00:17:08.052 }, 00:17:08.052 "peer_address": { 00:17:08.052 "trtype": "TCP", 00:17:08.052 "adrfam": "IPv4", 00:17:08.052 "traddr": "10.0.0.1", 00:17:08.052 "trsvcid": "54860" 00:17:08.052 }, 00:17:08.052 "auth": { 00:17:08.052 "state": "completed", 00:17:08.052 "digest": "sha256", 00:17:08.052 "dhgroup": "ffdhe6144" 00:17:08.052 } 00:17:08.052 } 00:17:08.052 ]' 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.052 11:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.052 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.310 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:17:08.876 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.134 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.393 00:17:09.393 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.393 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.393 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.651 { 00:17:09.651 "cntlid": 35, 00:17:09.651 "qid": 0, 00:17:09.651 "state": "enabled", 00:17:09.651 "thread": "nvmf_tgt_poll_group_000", 00:17:09.651 "listen_address": { 00:17:09.651 "trtype": "TCP", 00:17:09.651 "adrfam": "IPv4", 00:17:09.651 "traddr": "10.0.0.2", 00:17:09.651 "trsvcid": "4420" 00:17:09.651 }, 00:17:09.651 "peer_address": { 00:17:09.651 "trtype": "TCP", 00:17:09.651 "adrfam": "IPv4", 00:17:09.651 "traddr": "10.0.0.1", 00:17:09.651 "trsvcid": "54892" 00:17:09.651 
}, 00:17:09.651 "auth": { 00:17:09.651 "state": "completed", 00:17:09.651 "digest": "sha256", 00:17:09.651 "dhgroup": "ffdhe6144" 00:17:09.651 } 00:17:09.651 } 00:17:09.651 ]' 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.651 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.909 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.475 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.733 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.991 00:17:10.991 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.991 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.991 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.249 { 00:17:11.249 "cntlid": 37, 00:17:11.249 "qid": 0, 00:17:11.249 "state": "enabled", 00:17:11.249 "thread": "nvmf_tgt_poll_group_000", 00:17:11.249 "listen_address": { 00:17:11.249 "trtype": "TCP", 00:17:11.249 "adrfam": "IPv4", 00:17:11.249 "traddr": "10.0.0.2", 00:17:11.249 "trsvcid": "4420" 00:17:11.249 }, 00:17:11.249 "peer_address": { 00:17:11.249 "trtype": "TCP", 00:17:11.249 "adrfam": "IPv4", 00:17:11.249 "traddr": "10.0.0.1", 00:17:11.249 "trsvcid": "47570" 00:17:11.249 }, 00:17:11.249 "auth": { 00:17:11.249 "state": "completed", 00:17:11.249 "digest": "sha256", 00:17:11.249 "dhgroup": "ffdhe6144" 00:17:11.249 } 00:17:11.249 } 00:17:11.249 ]' 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.249 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:11.507 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.072 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:12.330 11:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.330 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.588 00:17:12.588 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.588 11:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.588 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.846 { 00:17:12.846 "cntlid": 39, 00:17:12.846 "qid": 0, 00:17:12.846 "state": "enabled", 00:17:12.846 "thread": "nvmf_tgt_poll_group_000", 00:17:12.846 "listen_address": { 00:17:12.846 "trtype": "TCP", 00:17:12.846 "adrfam": "IPv4", 00:17:12.846 "traddr": "10.0.0.2", 00:17:12.846 "trsvcid": "4420" 00:17:12.846 }, 00:17:12.846 "peer_address": { 00:17:12.846 "trtype": "TCP", 00:17:12.846 "adrfam": "IPv4", 00:17:12.846 "traddr": "10.0.0.1", 00:17:12.846 "trsvcid": "47600" 00:17:12.846 }, 00:17:12.846 "auth": { 00:17:12.846 "state": "completed", 00:17:12.846 "digest": "sha256", 00:17:12.846 "dhgroup": "ffdhe6144" 00:17:12.846 } 00:17:12.846 } 00:17:12.846 ]' 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.846 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.104 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.669 11:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.669 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.927 11:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.927 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.185 00:17:14.185 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.185 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.185 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.443 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.443 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.443 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.443 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.443 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.443 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.443 { 00:17:14.443 "cntlid": 41, 00:17:14.443 "qid": 0, 00:17:14.443 "state": "enabled", 00:17:14.443 "thread": 
"nvmf_tgt_poll_group_000", 00:17:14.443 "listen_address": { 00:17:14.443 "trtype": "TCP", 00:17:14.443 "adrfam": "IPv4", 00:17:14.443 "traddr": "10.0.0.2", 00:17:14.443 "trsvcid": "4420" 00:17:14.443 }, 00:17:14.443 "peer_address": { 00:17:14.443 "trtype": "TCP", 00:17:14.443 "adrfam": "IPv4", 00:17:14.443 "traddr": "10.0.0.1", 00:17:14.443 "trsvcid": "47636" 00:17:14.443 }, 00:17:14.443 "auth": { 00:17:14.443 "state": "completed", 00:17:14.443 "digest": "sha256", 00:17:14.443 "dhgroup": "ffdhe8192" 00:17:14.443 } 00:17:14.443 } 00:17:14.443 ]' 00:17:14.443 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.443 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.443 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.443 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.443 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.443 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.443 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.443 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.701 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.267 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.526 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.092 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.092 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.092 { 00:17:16.092 "cntlid": 43, 00:17:16.092 "qid": 0, 00:17:16.092 "state": "enabled", 00:17:16.092 "thread": "nvmf_tgt_poll_group_000", 00:17:16.092 "listen_address": { 00:17:16.092 "trtype": "TCP", 00:17:16.092 "adrfam": "IPv4", 00:17:16.092 "traddr": "10.0.0.2", 00:17:16.092 "trsvcid": "4420" 00:17:16.092 }, 00:17:16.092 "peer_address": { 00:17:16.092 "trtype": "TCP", 00:17:16.092 "adrfam": "IPv4", 00:17:16.092 "traddr": "10.0.0.1", 00:17:16.092 "trsvcid": "47668" 00:17:16.092 }, 00:17:16.092 "auth": { 00:17:16.092 "state": "completed", 00:17:16.092 "digest": "sha256", 00:17:16.092 "dhgroup": "ffdhe8192" 00:17:16.093 } 00:17:16.093 } 00:17:16.093 ]' 00:17:16.093 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.093 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.093 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.351 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.351 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.351 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.351 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.351 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.351 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.917 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.175 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.175 11:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.741 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.741 { 00:17:17.741 "cntlid": 45, 00:17:17.741 "qid": 0, 00:17:17.741 "state": "enabled", 00:17:17.741 "thread": "nvmf_tgt_poll_group_000", 00:17:17.741 "listen_address": { 00:17:17.741 "trtype": "TCP", 00:17:17.741 "adrfam": "IPv4", 00:17:17.741 "traddr": "10.0.0.2", 00:17:17.741 "trsvcid": "4420" 00:17:17.741 }, 00:17:17.741 "peer_address": { 00:17:17.741 "trtype": "TCP", 00:17:17.741 "adrfam": "IPv4", 00:17:17.741 "traddr": "10.0.0.1", 
00:17:17.741 "trsvcid": "47698" 00:17:17.741 }, 00:17:17.741 "auth": { 00:17:17.741 "state": "completed", 00:17:17.741 "digest": "sha256", 00:17:17.741 "dhgroup": "ffdhe8192" 00:17:17.741 } 00:17:17.741 } 00:17:17.741 ]' 00:17:17.741 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.999 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.999 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.999 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.999 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.999 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.999 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.999 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.256 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.822 11:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.822 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.080 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.080 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.080 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.338 00:17:19.338 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.338 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.338 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.596 11:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.596 { 00:17:19.596 "cntlid": 47, 00:17:19.596 "qid": 0, 00:17:19.596 "state": "enabled", 00:17:19.596 "thread": "nvmf_tgt_poll_group_000", 00:17:19.596 "listen_address": { 00:17:19.596 "trtype": "TCP", 00:17:19.596 "adrfam": "IPv4", 00:17:19.596 "traddr": "10.0.0.2", 00:17:19.596 "trsvcid": "4420" 00:17:19.596 }, 00:17:19.596 "peer_address": { 00:17:19.596 "trtype": "TCP", 00:17:19.596 "adrfam": "IPv4", 00:17:19.596 "traddr": "10.0.0.1", 00:17:19.596 "trsvcid": "47708" 00:17:19.596 }, 00:17:19.596 "auth": { 00:17:19.596 "state": "completed", 00:17:19.596 "digest": "sha256", 00:17:19.596 "dhgroup": "ffdhe8192" 00:17:19.596 } 00:17:19.596 } 00:17:19.596 ]' 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.877 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.877 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.877 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.877 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:20.476 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.735 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.992 00:17:20.993 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.993 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.993 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.993 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.993 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.993 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.993 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.250 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.250 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.250 { 00:17:21.250 "cntlid": 49, 00:17:21.250 "qid": 0, 00:17:21.250 "state": "enabled", 00:17:21.250 "thread": "nvmf_tgt_poll_group_000", 00:17:21.250 "listen_address": { 00:17:21.250 "trtype": "TCP", 00:17:21.250 "adrfam": "IPv4", 00:17:21.250 "traddr": "10.0.0.2", 00:17:21.250 "trsvcid": "4420" 00:17:21.250 }, 00:17:21.250 "peer_address": { 00:17:21.250 "trtype": "TCP", 00:17:21.250 "adrfam": "IPv4", 00:17:21.250 "traddr": "10.0.0.1", 00:17:21.250 "trsvcid": "41742" 00:17:21.250 }, 00:17:21.250 "auth": { 00:17:21.250 "state": "completed", 00:17:21.250 "digest": "sha384", 00:17:21.251 "dhgroup": "null" 00:17:21.251 } 00:17:21.251 } 00:17:21.251 ]' 00:17:21.251 
11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.251 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.251 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.251 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:21.251 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.251 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.251 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.251 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.508 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.074 
11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.074 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.074 11:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.332 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.332 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.332 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.332 00:17:22.332 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.332 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.332 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.590 { 00:17:22.590 "cntlid": 51, 00:17:22.590 "qid": 0, 00:17:22.590 "state": "enabled", 00:17:22.590 "thread": "nvmf_tgt_poll_group_000", 00:17:22.590 "listen_address": { 00:17:22.590 "trtype": "TCP", 00:17:22.590 "adrfam": "IPv4", 00:17:22.590 "traddr": "10.0.0.2", 00:17:22.590 "trsvcid": "4420" 00:17:22.590 }, 00:17:22.590 "peer_address": { 00:17:22.590 "trtype": "TCP", 00:17:22.590 "adrfam": "IPv4", 00:17:22.590 "traddr": "10.0.0.1", 00:17:22.590 "trsvcid": "41778" 00:17:22.590 }, 00:17:22.590 "auth": { 00:17:22.590 "state": "completed", 00:17:22.590 "digest": "sha384", 00:17:22.590 "dhgroup": "null" 00:17:22.590 } 00:17:22.590 } 00:17:22.590 ]' 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:22.590 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.848 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.848 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.848 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.848 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:17:23.413 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.413 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.413 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.413 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.413 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.413 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.414 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.414 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.672 11:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.672 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.930 00:17:23.930 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.930 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.930 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.188 { 00:17:24.188 "cntlid": 53, 00:17:24.188 "qid": 0, 00:17:24.188 "state": "enabled", 00:17:24.188 "thread": "nvmf_tgt_poll_group_000", 00:17:24.188 "listen_address": { 00:17:24.188 "trtype": "TCP", 00:17:24.188 "adrfam": "IPv4", 00:17:24.188 "traddr": "10.0.0.2", 00:17:24.188 "trsvcid": "4420" 00:17:24.188 }, 00:17:24.188 "peer_address": { 00:17:24.188 "trtype": "TCP", 00:17:24.188 "adrfam": "IPv4", 00:17:24.188 "traddr": "10.0.0.1", 00:17:24.188 "trsvcid": "41806" 00:17:24.188 }, 00:17:24.188 "auth": { 00:17:24.188 "state": "completed", 00:17:24.188 "digest": "sha384", 00:17:24.188 "dhgroup": "null" 00:17:24.188 } 00:17:24.188 } 00:17:24.188 ]' 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.188 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:24.189 11:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.189 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.189 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.189 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.446 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.012 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.271 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.271 00:17:25.529 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.529 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.529 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.529 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.529 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.529 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.529 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.529 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.529 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.529 { 00:17:25.529 "cntlid": 55, 00:17:25.529 "qid": 0, 00:17:25.529 "state": "enabled", 00:17:25.529 "thread": "nvmf_tgt_poll_group_000", 00:17:25.529 "listen_address": { 00:17:25.529 "trtype": "TCP", 00:17:25.529 "adrfam": "IPv4", 00:17:25.529 "traddr": "10.0.0.2", 00:17:25.529 "trsvcid": "4420" 00:17:25.529 }, 00:17:25.529 "peer_address": { 00:17:25.529 "trtype": "TCP", 00:17:25.529 "adrfam": "IPv4", 00:17:25.529 "traddr": "10.0.0.1", 00:17:25.529 "trsvcid": "41836" 00:17:25.529 }, 00:17:25.529 "auth": { 
00:17:25.529 "state": "completed", 00:17:25.529 "digest": "sha384", 00:17:25.529 "dhgroup": "null" 00:17:25.529 } 00:17:25.529 } 00:17:25.529 ]' 00:17:25.530 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.530 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.530 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.787 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:25.787 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.787 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.787 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.787 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.787 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:17:26.351 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.351 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.351 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.352 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.352 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.352 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.352 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.352 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.352 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.609 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.866 00:17:26.866 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.866 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.866 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.124 { 00:17:27.124 "cntlid": 57, 00:17:27.124 "qid": 0, 00:17:27.124 "state": "enabled", 00:17:27.124 "thread": "nvmf_tgt_poll_group_000", 00:17:27.124 "listen_address": { 00:17:27.124 "trtype": "TCP", 00:17:27.124 "adrfam": "IPv4", 00:17:27.124 "traddr": "10.0.0.2", 00:17:27.124 "trsvcid": "4420" 00:17:27.124 }, 00:17:27.124 "peer_address": { 00:17:27.124 "trtype": "TCP", 00:17:27.124 "adrfam": "IPv4", 00:17:27.124 "traddr": "10.0.0.1", 00:17:27.124 "trsvcid": "41856" 00:17:27.124 }, 00:17:27.124 "auth": { 00:17:27.124 "state": "completed", 00:17:27.124 "digest": "sha384", 00:17:27.124 "dhgroup": "ffdhe2048" 00:17:27.124 } 00:17:27.124 } 00:17:27.124 ]' 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.124 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.381 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.947 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.947 11:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.204 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:28.204 00:17:28.205 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.205 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.205 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.462 { 00:17:28.462 "cntlid": 59, 00:17:28.462 "qid": 0, 00:17:28.462 "state": "enabled", 00:17:28.462 "thread": "nvmf_tgt_poll_group_000", 00:17:28.462 "listen_address": { 00:17:28.462 "trtype": "TCP", 00:17:28.462 "adrfam": "IPv4", 00:17:28.462 "traddr": "10.0.0.2", 00:17:28.462 "trsvcid": "4420" 00:17:28.462 }, 00:17:28.462 "peer_address": { 00:17:28.462 "trtype": "TCP", 00:17:28.462 "adrfam": "IPv4", 00:17:28.462 "traddr": "10.0.0.1", 00:17:28.462 "trsvcid": "41890" 00:17:28.462 }, 00:17:28.462 "auth": { 00:17:28.462 "state": "completed", 00:17:28.462 "digest": "sha384", 00:17:28.462 "dhgroup": "ffdhe2048" 00:17:28.462 } 00:17:28.462 } 00:17:28.462 ]' 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.462 
11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.462 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.720 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.720 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.720 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.720 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.720 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.720 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:17:29.286 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.286 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.286 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.286 11:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.286 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.286 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.286 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:29.286 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.544 11:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.544 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.803 00:17:29.803 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.803 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.803 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.060 { 
00:17:30.060 "cntlid": 61, 00:17:30.060 "qid": 0, 00:17:30.060 "state": "enabled", 00:17:30.060 "thread": "nvmf_tgt_poll_group_000", 00:17:30.060 "listen_address": { 00:17:30.060 "trtype": "TCP", 00:17:30.060 "adrfam": "IPv4", 00:17:30.060 "traddr": "10.0.0.2", 00:17:30.060 "trsvcid": "4420" 00:17:30.060 }, 00:17:30.060 "peer_address": { 00:17:30.060 "trtype": "TCP", 00:17:30.060 "adrfam": "IPv4", 00:17:30.060 "traddr": "10.0.0.1", 00:17:30.060 "trsvcid": "51288" 00:17:30.060 }, 00:17:30.060 "auth": { 00:17:30.060 "state": "completed", 00:17:30.060 "digest": "sha384", 00:17:30.060 "dhgroup": "ffdhe2048" 00:17:30.060 } 00:17:30.060 } 00:17:30.060 ]' 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.060 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.317 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.883 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.141 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.141 00:17:31.399 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.399 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.399 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.399 11:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.399 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.399 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.399 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.399 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.399 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.399 { 00:17:31.399 "cntlid": 63, 00:17:31.399 "qid": 0, 00:17:31.399 "state": "enabled", 00:17:31.399 "thread": "nvmf_tgt_poll_group_000", 00:17:31.399 "listen_address": { 00:17:31.399 "trtype": "TCP", 00:17:31.399 "adrfam": "IPv4", 00:17:31.399 "traddr": "10.0.0.2", 00:17:31.399 "trsvcid": "4420" 00:17:31.399 }, 00:17:31.400 "peer_address": { 00:17:31.400 "trtype": "TCP", 00:17:31.400 "adrfam": "IPv4", 00:17:31.400 "traddr": "10.0.0.1", 00:17:31.400 "trsvcid": "51304" 00:17:31.400 }, 00:17:31.400 "auth": { 00:17:31.400 "state": "completed", 00:17:31.400 "digest": "sha384", 00:17:31.400 "dhgroup": "ffdhe2048" 00:17:31.400 } 00:17:31.400 } 00:17:31.400 ]' 00:17:31.400 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.400 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.400 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.657 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.657 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.657 11:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.657 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.657 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.915 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.481 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.481 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.481 11:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.739
00:17:32.739 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:32.739 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:32.739 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:32.997 {
00:17:32.997 "cntlid": 65,
00:17:32.997 "qid": 0,
00:17:32.997 "state": "enabled",
00:17:32.997 "thread": "nvmf_tgt_poll_group_000",
00:17:32.997 "listen_address": {
00:17:32.997 "trtype": "TCP",
00:17:32.997 "adrfam": "IPv4",
00:17:32.997 "traddr": "10.0.0.2",
00:17:32.997 "trsvcid": "4420"
00:17:32.997 },
00:17:32.997 "peer_address": {
00:17:32.997 "trtype": "TCP",
00:17:32.997 "adrfam": "IPv4",
00:17:32.997 "traddr": "10.0.0.1",
00:17:32.997 "trsvcid": "51330"
00:17:32.997 },
00:17:32.997 "auth": {
00:17:32.997 "state": "completed",
00:17:32.997 "digest": "sha384",
00:17:32.997 "dhgroup": "ffdhe3072"
00:17:32.997 }
00:17:32.997 }
00:17:32.997 ]'
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:32.997 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:33.256 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=:
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:33.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1
controller(s)
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:33.919 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:34.177
00:17:34.177 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:34.177 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:34.177 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:34.434 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:34.434 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:34.434 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- #
xtrace_disable
00:17:34.434 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.434 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.434 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:34.434 {
00:17:34.434 "cntlid": 67,
00:17:34.434 "qid": 0,
00:17:34.434 "state": "enabled",
00:17:34.434 "thread": "nvmf_tgt_poll_group_000",
00:17:34.434 "listen_address": {
00:17:34.434 "trtype": "TCP",
00:17:34.434 "adrfam": "IPv4",
00:17:34.434 "traddr": "10.0.0.2",
00:17:34.434 "trsvcid": "4420"
00:17:34.434 },
00:17:34.434 "peer_address": {
00:17:34.434 "trtype": "TCP",
00:17:34.434 "adrfam": "IPv4",
00:17:34.434 "traddr": "10.0.0.1",
00:17:34.434 "trsvcid": "51358"
00:17:34.434 },
00:17:34.434 "auth": {
00:17:34.434 "state": "completed",
00:17:34.434 "digest": "sha384",
00:17:34.434 "dhgroup": "ffdhe3072"
00:17:34.434 }
00:17:34.434 }
00:17:34.434 ]'
00:17:34.434 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:34.434 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:34.434 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:34.434 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:34.434 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:34.692 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:34.692 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:34.692 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:34.692 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==:
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:35.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:35.256 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- #
connect_authenticate sha384 ffdhe3072 2
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.513 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.770
00:17:35.770 11:25:31
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:35.770 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:35.770 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:36.027 {
00:17:36.027 "cntlid": 69,
00:17:36.027 "qid": 0,
00:17:36.027 "state": "enabled",
00:17:36.027 "thread": "nvmf_tgt_poll_group_000",
00:17:36.027 "listen_address": {
00:17:36.027 "trtype": "TCP",
00:17:36.027 "adrfam": "IPv4",
00:17:36.027 "traddr": "10.0.0.2",
00:17:36.027 "trsvcid": "4420"
00:17:36.027 },
00:17:36.027 "peer_address": {
00:17:36.027 "trtype": "TCP",
00:17:36.027 "adrfam": "IPv4",
00:17:36.027 "traddr": "10.0.0.1",
00:17:36.027 "trsvcid": "51384"
00:17:36.027 },
00:17:36.027 "auth": {
00:17:36.027 "state": "completed",
00:17:36.027 "digest": "sha384",
00:17:36.027 "dhgroup": "ffdhe3072"
00:17:36.027 }
00:17:36.027 }
00:17:36.027 ]'
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:36.027 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:36.028 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:36.028 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:36.028 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:36.028 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:36.285 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T:
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:36.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@10 -- # set +x
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:36.850 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:37.107 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0
== 0 ]]
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:37.108 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:37.108
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.365 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:37.365 {
00:17:37.365 "cntlid": 71,
00:17:37.365 "qid": 0,
00:17:37.365 "state": "enabled",
00:17:37.365 "thread": "nvmf_tgt_poll_group_000",
00:17:37.365 "listen_address": {
00:17:37.365 "trtype": "TCP",
00:17:37.365 "adrfam": "IPv4",
00:17:37.365 "traddr": "10.0.0.2",
00:17:37.365 "trsvcid": "4420"
00:17:37.365 },
00:17:37.365 "peer_address": {
00:17:37.365 "trtype": "TCP",
00:17:37.365 "adrfam": "IPv4",
00:17:37.365 "traddr": "10.0.0.1",
00:17:37.365 "trsvcid": "51420"
00:17:37.365 },
00:17:37.365 "auth": {
00:17:37.365 "state": "completed",
00:17:37.365 "digest": "sha384",
00:17:37.365 "dhgroup": "ffdhe3072"
00:17:37.365 }
00:17:37.365 }
00:17:37.365 ]'
00:17:37.366 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:37.366 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:37.366 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:37.623 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:37.623 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:37.623 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:37.623 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:37.623 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:37.623 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=:
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:38.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:38.188 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@36 -- # key=key0
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:38.446 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:38.704
00:17:38.704 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:38.704 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:38.704 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:38.962 11:25:34
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:38.962 {
00:17:38.962 "cntlid": 73,
00:17:38.962 "qid": 0,
00:17:38.962 "state": "enabled",
00:17:38.962 "thread": "nvmf_tgt_poll_group_000",
00:17:38.962 "listen_address": {
00:17:38.962 "trtype": "TCP",
00:17:38.962 "adrfam": "IPv4",
00:17:38.962 "traddr": "10.0.0.2",
00:17:38.962 "trsvcid": "4420"
00:17:38.962 },
00:17:38.962 "peer_address": {
00:17:38.962 "trtype": "TCP",
00:17:38.962 "adrfam": "IPv4",
00:17:38.962 "traddr": "10.0.0.1",
00:17:38.962 "trsvcid": "51438"
00:17:38.962 },
00:17:38.962 "auth": {
00:17:38.962 "state": "completed",
00:17:38.962 "digest": "sha384",
00:17:38.962 "dhgroup": "ffdhe4096"
00:17:38.962 }
00:17:38.962 }
00:17:38.962 ]'
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:38.962 11:25:34
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:38.962 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:39.219 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=:
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:39.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests
sha384 --dhchap-dhgroups ffdhe4096
00:17:39.786 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:40.044 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.302 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.302 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.302 { 00:17:40.302 "cntlid": 75, 00:17:40.302 "qid": 0, 00:17:40.302 "state": "enabled", 00:17:40.303 "thread": "nvmf_tgt_poll_group_000", 00:17:40.303 "listen_address": { 00:17:40.303 "trtype": "TCP", 00:17:40.303 "adrfam": "IPv4", 00:17:40.303 "traddr": "10.0.0.2", 00:17:40.303 "trsvcid": "4420" 00:17:40.303 }, 00:17:40.303 "peer_address": { 00:17:40.303 "trtype": "TCP", 00:17:40.303 "adrfam": "IPv4", 00:17:40.303 "traddr": "10.0.0.1", 00:17:40.303 "trsvcid": "41744" 00:17:40.303 
}, 00:17:40.303 "auth": { 00:17:40.303 "state": "completed", 00:17:40.303 "digest": "sha384", 00:17:40.303 "dhgroup": "ffdhe4096" 00:17:40.303 } 00:17:40.303 } 00:17:40.303 ]' 00:17:40.303 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.561 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.561 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.561 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.561 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.561 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.561 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.561 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.819 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:17:41.385 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.386 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:41.386 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.386 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.386 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.386 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.386 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.386 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.386 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.644 00:17:41.644 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.644 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.644 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.903 { 00:17:41.903 "cntlid": 77, 00:17:41.903 "qid": 0, 00:17:41.903 "state": "enabled", 00:17:41.903 "thread": "nvmf_tgt_poll_group_000", 00:17:41.903 "listen_address": { 00:17:41.903 "trtype": "TCP", 00:17:41.903 "adrfam": "IPv4", 00:17:41.903 "traddr": "10.0.0.2", 00:17:41.903 "trsvcid": "4420" 00:17:41.903 }, 00:17:41.903 "peer_address": { 00:17:41.903 "trtype": "TCP", 00:17:41.903 "adrfam": "IPv4", 00:17:41.903 "traddr": "10.0.0.1", 00:17:41.903 "trsvcid": "41768" 00:17:41.903 }, 00:17:41.903 "auth": { 00:17:41.903 "state": "completed", 00:17:41.903 "digest": "sha384", 00:17:41.903 "dhgroup": "ffdhe4096" 00:17:41.903 } 00:17:41.903 } 00:17:41.903 ]' 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.903 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.161 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.161 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.161 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:42.161 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.728 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:42.986 11:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.986 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.987 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.245 00:17:43.245 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.245 11:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.245 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.503 { 00:17:43.503 "cntlid": 79, 00:17:43.503 "qid": 0, 00:17:43.503 "state": "enabled", 00:17:43.503 "thread": "nvmf_tgt_poll_group_000", 00:17:43.503 "listen_address": { 00:17:43.503 "trtype": "TCP", 00:17:43.503 "adrfam": "IPv4", 00:17:43.503 "traddr": "10.0.0.2", 00:17:43.503 "trsvcid": "4420" 00:17:43.503 }, 00:17:43.503 "peer_address": { 00:17:43.503 "trtype": "TCP", 00:17:43.503 "adrfam": "IPv4", 00:17:43.503 "traddr": "10.0.0.1", 00:17:43.503 "trsvcid": "41786" 00:17:43.503 }, 00:17:43.503 "auth": { 00:17:43.503 "state": "completed", 00:17:43.503 "digest": "sha384", 00:17:43.503 "dhgroup": "ffdhe4096" 00:17:43.503 } 00:17:43.503 } 00:17:43.503 ]' 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.503 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.761 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.328 11:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.328 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.586 11:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.586 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.844 00:17:44.844 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.844 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.844 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.103 { 00:17:45.103 "cntlid": 81, 00:17:45.103 "qid": 0, 00:17:45.103 "state": "enabled", 00:17:45.103 "thread": 
"nvmf_tgt_poll_group_000", 00:17:45.103 "listen_address": { 00:17:45.103 "trtype": "TCP", 00:17:45.103 "adrfam": "IPv4", 00:17:45.103 "traddr": "10.0.0.2", 00:17:45.103 "trsvcid": "4420" 00:17:45.103 }, 00:17:45.103 "peer_address": { 00:17:45.103 "trtype": "TCP", 00:17:45.103 "adrfam": "IPv4", 00:17:45.103 "traddr": "10.0.0.1", 00:17:45.103 "trsvcid": "41818" 00:17:45.103 }, 00:17:45.103 "auth": { 00:17:45.103 "state": "completed", 00:17:45.103 "digest": "sha384", 00:17:45.103 "dhgroup": "ffdhe6144" 00:17:45.103 } 00:17:45.103 } 00:17:45.103 ]' 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.103 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.361 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.929 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.187 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.445 00:17:46.445 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.445 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.445 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.703 { 00:17:46.703 "cntlid": 83, 00:17:46.703 "qid": 0, 00:17:46.703 "state": "enabled", 00:17:46.703 "thread": "nvmf_tgt_poll_group_000", 00:17:46.703 "listen_address": { 00:17:46.703 "trtype": "TCP", 00:17:46.703 "adrfam": "IPv4", 00:17:46.703 "traddr": "10.0.0.2", 00:17:46.703 "trsvcid": "4420" 00:17:46.703 }, 00:17:46.703 "peer_address": { 00:17:46.703 "trtype": "TCP", 00:17:46.703 "adrfam": "IPv4", 00:17:46.703 "traddr": "10.0.0.1", 00:17:46.703 "trsvcid": "41850" 00:17:46.703 }, 00:17:46.703 "auth": { 00:17:46.703 "state": "completed", 00:17:46.703 "digest": "sha384", 00:17:46.703 "dhgroup": "ffdhe6144" 00:17:46.703 } 00:17:46.703 } 00:17:46.703 ]' 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:46.703 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:46.962 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==:
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:47.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:47.529 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:47.788 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.789 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:48.046
00:17:48.046 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:48.046 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:48.046 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:48.305 {
00:17:48.305 "cntlid": 85,
00:17:48.305 "qid": 0,
00:17:48.305 "state": "enabled",
00:17:48.305 "thread": "nvmf_tgt_poll_group_000",
00:17:48.305 "listen_address": {
00:17:48.305 "trtype": "TCP",
00:17:48.305 "adrfam": "IPv4",
00:17:48.305 "traddr": "10.0.0.2",
00:17:48.305 "trsvcid": "4420"
00:17:48.305 },
00:17:48.305 "peer_address": {
00:17:48.305 "trtype": "TCP",
00:17:48.305 "adrfam": "IPv4",
00:17:48.305 "traddr": "10.0.0.1",
00:17:48.305 "trsvcid": "41876"
00:17:48.305 },
00:17:48.305 "auth": {
00:17:48.305 "state": "completed",
00:17:48.305 "digest": "sha384",
00:17:48.305 "dhgroup": "ffdhe6144"
00:17:48.305 }
00:17:48.305 }
00:17:48.305 ]'
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:48.305 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:48.563 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T:
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:49.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:49.130 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:49.388 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:49.389 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:49.647
00:17:49.647 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:49.647 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:49.647 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:49.906 {
00:17:49.906 "cntlid": 87,
00:17:49.906 "qid": 0,
00:17:49.906 "state": "enabled",
00:17:49.906 "thread": "nvmf_tgt_poll_group_000",
00:17:49.906 "listen_address": {
00:17:49.906 "trtype": "TCP",
00:17:49.906 "adrfam": "IPv4",
00:17:49.906 "traddr": "10.0.0.2",
00:17:49.906 "trsvcid": "4420"
00:17:49.906 },
00:17:49.906 "peer_address": {
00:17:49.906 "trtype": "TCP",
00:17:49.906 "adrfam": "IPv4",
00:17:49.906 "traddr": "10.0.0.1",
00:17:49.906 "trsvcid": "41910"
00:17:49.906 },
00:17:49.906 "auth": {
00:17:49.906 "state": "completed",
00:17:49.906 "digest": "sha384",
00:17:49.906 "dhgroup": "ffdhe6144"
00:17:49.906 }
00:17:49.906 }
00:17:49.906 ]'
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:49.906 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:50.165 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=:
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:50.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:50.731 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:50.989 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:17:50.989 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:50.989 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:50.990 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:51.247
00:17:51.247 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:51.247 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:51.247 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:51.506 {
00:17:51.506 "cntlid": 89,
00:17:51.506 "qid": 0,
00:17:51.506 "state": "enabled",
00:17:51.506 "thread": "nvmf_tgt_poll_group_000",
00:17:51.506 "listen_address": {
00:17:51.506 "trtype": "TCP",
00:17:51.506 "adrfam": "IPv4",
00:17:51.506 "traddr": "10.0.0.2",
00:17:51.506 "trsvcid": "4420"
00:17:51.506 },
00:17:51.506 "peer_address": {
00:17:51.506 "trtype": "TCP",
00:17:51.506 "adrfam": "IPv4",
00:17:51.506 "traddr": "10.0.0.1",
00:17:51.506 "trsvcid": "44290"
00:17:51.506 },
00:17:51.506 "auth": {
00:17:51.506 "state": "completed",
00:17:51.506 "digest": "sha384",
00:17:51.506 "dhgroup": "ffdhe8192"
00:17:51.506 }
00:17:51.506 }
00:17:51.506 ]'
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:51.506 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:51.764 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:51.764 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:51.764 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:51.764 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=:
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:52.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:52.329 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:52.587 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:53.152
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:53.152 {
00:17:53.152 "cntlid": 91,
00:17:53.152 "qid": 0,
00:17:53.152 "state": "enabled",
00:17:53.152 "thread": "nvmf_tgt_poll_group_000",
00:17:53.152 "listen_address": {
00:17:53.152 "trtype": "TCP",
00:17:53.152 "adrfam": "IPv4",
00:17:53.152 "traddr": "10.0.0.2",
00:17:53.152 "trsvcid": "4420"
00:17:53.152 },
00:17:53.152 "peer_address": {
00:17:53.152 "trtype": "TCP",
00:17:53.152 "adrfam": "IPv4",
00:17:53.152 "traddr": "10.0.0.1",
00:17:53.152 "trsvcid": "44314"
00:17:53.152 },
00:17:53.152 "auth": {
00:17:53.152 "state": "completed",
00:17:53.152 "digest": "sha384",
00:17:53.152 "dhgroup": "ffdhe8192"
00:17:53.152 }
00:17:53.152 }
00:17:53.152 ]'
00:17:53.152 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:53.410 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:53.410 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:53.410 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:53.410 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:53.410 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:53.410 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:53.410 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:53.668 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==:
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:54.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:54.234 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:54.801
00:17:54.801 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:54.801 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:54.801 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:55.059 {
00:17:55.059 "cntlid": 93,
00:17:55.059 "qid": 0,
00:17:55.059 "state": "enabled",
00:17:55.059 "thread": "nvmf_tgt_poll_group_000",
00:17:55.059 "listen_address": {
00:17:55.059 "trtype": "TCP",
00:17:55.059 "adrfam": "IPv4",
00:17:55.059 "traddr": "10.0.0.2",
00:17:55.059 "trsvcid": "4420"
00:17:55.059 },
00:17:55.059 "peer_address": {
00:17:55.059 "trtype": "TCP",
00:17:55.059 "adrfam": "IPv4",
00:17:55.059 "traddr": "10.0.0.1",
00:17:55.059 "trsvcid": "44340"
00:17:55.059 },
00:17:55.059 "auth": {
00:17:55.059 "state": "completed",
00:17:55.059 "digest": "sha384",
00:17:55.059 "dhgroup": "ffdhe8192"
00:17:55.059 }
00:17:55.059 }
00:17:55.059 ]'
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:55.059 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:55.317 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T:
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:55.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:55.882 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.449 00:17:56.450 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.450 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.450 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.707 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.707 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.707 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.707 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.707 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.707 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.707 { 00:17:56.707 "cntlid": 95, 00:17:56.707 "qid": 0, 00:17:56.707 "state": "enabled", 00:17:56.707 "thread": "nvmf_tgt_poll_group_000", 00:17:56.707 "listen_address": { 00:17:56.707 "trtype": "TCP", 00:17:56.707 "adrfam": "IPv4", 00:17:56.707 "traddr": "10.0.0.2", 00:17:56.707 "trsvcid": "4420" 00:17:56.707 }, 00:17:56.707 "peer_address": { 00:17:56.707 "trtype": "TCP", 00:17:56.707 "adrfam": "IPv4", 00:17:56.707 "traddr": "10.0.0.1", 
00:17:56.707 "trsvcid": "44372" 00:17:56.707 }, 00:17:56.707 "auth": { 00:17:56.707 "state": "completed", 00:17:56.707 "digest": "sha384", 00:17:56.707 "dhgroup": "ffdhe8192" 00:17:56.707 } 00:17:56.707 } 00:17:56.708 ]' 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.708 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.966 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.531 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.789 11:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.789 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.047 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.047 { 00:17:58.047 "cntlid": 97, 00:17:58.047 "qid": 0, 00:17:58.047 "state": "enabled", 00:17:58.047 "thread": "nvmf_tgt_poll_group_000", 00:17:58.047 "listen_address": { 00:17:58.047 "trtype": "TCP", 00:17:58.047 "adrfam": "IPv4", 00:17:58.047 "traddr": "10.0.0.2", 00:17:58.047 "trsvcid": "4420" 00:17:58.047 }, 00:17:58.047 "peer_address": { 00:17:58.047 "trtype": "TCP", 00:17:58.047 "adrfam": "IPv4", 00:17:58.047 "traddr": "10.0.0.1", 00:17:58.047 "trsvcid": "44396" 00:17:58.047 }, 00:17:58.047 "auth": { 00:17:58.047 "state": "completed", 00:17:58.047 "digest": "sha512", 00:17:58.047 "dhgroup": "null" 00:17:58.047 } 00:17:58.047 } 00:17:58.047 ]' 00:17:58.047 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.305 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.305 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.305 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:58.305 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.305 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.305 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:58.305 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.562 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.127 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.385 00:17:59.385 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.385 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.385 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.643 { 00:17:59.643 "cntlid": 99, 00:17:59.643 "qid": 0, 00:17:59.643 "state": "enabled", 00:17:59.643 "thread": "nvmf_tgt_poll_group_000", 00:17:59.643 "listen_address": { 00:17:59.643 "trtype": "TCP", 00:17:59.643 "adrfam": "IPv4", 00:17:59.643 "traddr": "10.0.0.2", 00:17:59.643 "trsvcid": "4420" 00:17:59.643 }, 00:17:59.643 "peer_address": { 00:17:59.643 "trtype": "TCP", 00:17:59.643 "adrfam": "IPv4", 00:17:59.643 "traddr": "10.0.0.1", 00:17:59.643 "trsvcid": "44420" 00:17:59.643 }, 00:17:59.643 "auth": { 00:17:59.643 "state": "completed", 00:17:59.643 "digest": "sha512", 00:17:59.643 "dhgroup": "null" 00:17:59.643 } 00:17:59.643 } 00:17:59.643 ]' 00:17:59.643 
11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.643 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.644 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.902 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.902 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.902 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.902 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:18:00.468 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.468 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.468 11:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.468 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.468 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.468 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.468 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.468 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.727 11:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.727 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.986 00:18:00.986 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.986 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.986 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.986 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.986 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.986 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.986 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.245 { 00:18:01.245 "cntlid": 101, 00:18:01.245 "qid": 0, 00:18:01.245 "state": "enabled", 00:18:01.245 "thread": "nvmf_tgt_poll_group_000", 00:18:01.245 "listen_address": { 00:18:01.245 "trtype": "TCP", 00:18:01.245 "adrfam": "IPv4", 00:18:01.245 "traddr": "10.0.0.2", 00:18:01.245 "trsvcid": "4420" 00:18:01.245 }, 00:18:01.245 "peer_address": { 00:18:01.245 "trtype": "TCP", 00:18:01.245 "adrfam": "IPv4", 00:18:01.245 "traddr": "10.0.0.1", 00:18:01.245 "trsvcid": "46042" 00:18:01.245 }, 00:18:01.245 "auth": { 00:18:01.245 "state": "completed", 00:18:01.245 "digest": "sha512", 00:18:01.245 "dhgroup": "null" 00:18:01.245 } 00:18:01.245 } 00:18:01.245 ]' 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.245 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.523 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.132 11:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.132 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.404 00:18:02.404 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.404 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.404 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.663 { 00:18:02.663 "cntlid": 103, 00:18:02.663 "qid": 0, 00:18:02.663 "state": "enabled", 00:18:02.663 "thread": "nvmf_tgt_poll_group_000", 00:18:02.663 "listen_address": { 00:18:02.663 "trtype": "TCP", 00:18:02.663 "adrfam": "IPv4", 00:18:02.663 "traddr": "10.0.0.2", 00:18:02.663 "trsvcid": "4420" 00:18:02.663 }, 00:18:02.663 "peer_address": { 00:18:02.663 "trtype": "TCP", 00:18:02.663 "adrfam": "IPv4", 00:18:02.663 "traddr": "10.0.0.1", 00:18:02.663 "trsvcid": "46076" 00:18:02.663 }, 00:18:02.663 "auth": { 00:18:02.663 "state": "completed", 00:18:02.663 "digest": "sha512", 00:18:02.663 "dhgroup": "null" 00:18:02.663 } 00:18:02.663 } 00:18:02.663 ]' 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:02.663 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:02.921 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=:
00:18:03.487 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:03.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:03.487 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:03.487 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.487 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.487 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.487 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:03.487 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:03.487 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:03.487 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:03.745 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:18:03.745 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:03.745 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:03.745 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:03.745 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:03.745 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:03.745 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:03.746 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.746 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.746 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.746 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:03.746 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:03.746
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:04.003 {
00:18:04.003 "cntlid": 105,
00:18:04.003 "qid": 0,
00:18:04.003 "state": "enabled",
00:18:04.003 "thread": "nvmf_tgt_poll_group_000",
00:18:04.003 "listen_address": {
00:18:04.003 "trtype": "TCP",
00:18:04.003 "adrfam": "IPv4",
00:18:04.003 "traddr": "10.0.0.2",
00:18:04.003 "trsvcid": "4420"
00:18:04.003 },
00:18:04.003 "peer_address": {
00:18:04.003 "trtype": "TCP",
00:18:04.003 "adrfam": "IPv4",
00:18:04.003 "traddr": "10.0.0.1",
00:18:04.003 "trsvcid": "46096"
00:18:04.003 },
00:18:04.003 "auth": {
00:18:04.003 "state": "completed",
00:18:04.003 "digest": "sha512",
00:18:04.003 "dhgroup": "ffdhe2048"
00:18:04.003 }
00:18:04.003 }
00:18:04.003 ]'
00:18:04.003 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:04.260 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:04.260 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:04.260 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:04.260 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:04.260 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:04.260 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:04.260 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:04.517 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=:
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:05.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:05.081 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:05.339
00:18:05.339 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:05.339 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:05.339 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:05.597 {
00:18:05.597 "cntlid": 107,
00:18:05.597 "qid": 0,
00:18:05.597 "state": "enabled",
00:18:05.597 "thread": "nvmf_tgt_poll_group_000",
00:18:05.597 "listen_address": {
00:18:05.597 "trtype": "TCP",
00:18:05.597 "adrfam": "IPv4",
00:18:05.597 "traddr": "10.0.0.2",
00:18:05.597 "trsvcid": "4420"
00:18:05.597 },
00:18:05.597 "peer_address": {
00:18:05.597 "trtype": "TCP",
00:18:05.597 "adrfam": "IPv4",
00:18:05.597 "traddr": "10.0.0.1",
00:18:05.597 "trsvcid": "46116"
00:18:05.597 },
00:18:05.597 "auth": {
00:18:05.597 "state": "completed",
00:18:05.597 "digest": "sha512",
00:18:05.597 "dhgroup": "ffdhe2048"
00:18:05.597 }
00:18:05.597 }
00:18:05.597 ]'
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:05.597 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:05.855 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==:
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:06.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:06.422 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:06.680 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:06.938
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:06.938 {
00:18:06.938 "cntlid": 109,
00:18:06.938 "qid": 0,
00:18:06.938 "state": "enabled",
00:18:06.938 "thread": "nvmf_tgt_poll_group_000",
00:18:06.938 "listen_address": {
00:18:06.938 "trtype": "TCP",
00:18:06.938 "adrfam": "IPv4",
00:18:06.938 "traddr": "10.0.0.2",
00:18:06.938 "trsvcid": "4420"
00:18:06.938 },
00:18:06.938 "peer_address": {
00:18:06.938 "trtype": "TCP",
00:18:06.938 "adrfam": "IPv4",
00:18:06.938 "traddr": "10.0.0.1",
00:18:06.938 "trsvcid": "46148"
00:18:06.938 },
00:18:06.938 "auth": {
00:18:06.938 "state": "completed",
00:18:06.938 "digest": "sha512",
00:18:06.938 "dhgroup": "ffdhe2048"
00:18:06.938 }
00:18:06.938 }
00:18:06.938 ]'
00:18:06.938 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:07.196 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:07.196 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:07.196 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:07.196 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:07.196 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:07.196 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:07.196 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:07.454 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T:
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:08.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:08.019 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:08.277
00:18:08.277 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:08.277 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:08.277 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:08.535 {
00:18:08.535 "cntlid": 111,
00:18:08.535 "qid": 0,
00:18:08.535 "state": "enabled",
00:18:08.535 "thread": "nvmf_tgt_poll_group_000",
00:18:08.535 "listen_address": {
00:18:08.535 "trtype": "TCP",
00:18:08.535 "adrfam": "IPv4",
00:18:08.535 "traddr": "10.0.0.2",
00:18:08.535 "trsvcid": "4420"
00:18:08.535 },
00:18:08.535 "peer_address": {
00:18:08.535 "trtype": "TCP",
00:18:08.535 "adrfam": "IPv4",
00:18:08.535 "traddr": "10.0.0.1",
00:18:08.535 "trsvcid": "46164"
00:18:08.535 },
00:18:08.535 "auth": {
00:18:08.535 "state": "completed",
00:18:08.535 "digest": "sha512",
00:18:08.535 "dhgroup": "ffdhe2048"
00:18:08.535 }
00:18:08.535 }
00:18:08.535 ]'
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:08.535 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:08.793 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=:
00:18:09.358 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:09.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:09.358 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:09.358 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.358 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.358 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.358 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:09.358 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:09.359 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:09.359 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:09.616 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:09.876
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:09.876 {
00:18:09.876 "cntlid": 113,
00:18:09.876 "qid": 0,
00:18:09.876 "state": "enabled",
00:18:09.876 "thread": "nvmf_tgt_poll_group_000",
00:18:09.876 "listen_address": {
00:18:09.876 "trtype": "TCP",
00:18:09.876 "adrfam": "IPv4",
00:18:09.876 "traddr": "10.0.0.2",
00:18:09.876 "trsvcid": "4420"
00:18:09.876 },
00:18:09.876 "peer_address": {
00:18:09.876 "trtype": "TCP",
00:18:09.876 "adrfam": "IPv4",
00:18:09.876 "traddr": "10.0.0.1",
00:18:09.876 "trsvcid": "41004"
00:18:09.876 },
00:18:09.876 "auth": {
00:18:09.876 "state": "completed",
00:18:09.876 "digest": "sha512",
00:18:09.876 "dhgroup": "ffdhe3072"
00:18:09.876 }
00:18:09.876 }
00:18:09.876 ]'
00:18:09.876 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:10.134 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:10.134 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:10.134 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:10.134 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:10.134 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:10.134 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:10.134 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:10.392 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=:
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:10.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:10.958 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:10.959 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:10.959 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:10.959 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:10.959 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.959 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:10.959 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:10.959 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:11.217
00:18:11.217 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:11.217 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:11.217 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:11.475 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:11.475 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:11.475 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.475 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.475 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.475 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:11.475 {
00:18:11.475 "cntlid": 115,
00:18:11.475 "qid": 0,
00:18:11.475 "state": "enabled",
00:18:11.475 "thread": "nvmf_tgt_poll_group_000",
00:18:11.475 "listen_address": {
00:18:11.475 "trtype": "TCP",
00:18:11.475 "adrfam": "IPv4",
00:18:11.475 "traddr": "10.0.0.2",
00:18:11.475 "trsvcid": "4420"
00:18:11.475 },
00:18:11.475 "peer_address": {
00:18:11.475 "trtype": "TCP",
00:18:11.475 "adrfam": "IPv4",
00:18:11.475 "traddr": "10.0.0.1",
00:18:11.475 "trsvcid": "41034"
}, 00:18:11.475 "auth": { 00:18:11.475 "state": "completed", 00:18:11.475 "digest": "sha512", 00:18:11.475 "dhgroup": "ffdhe3072" 00:18:11.475 } 00:18:11.475 } 00:18:11.475 ]' 00:18:11.475 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.475 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.475 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.475 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.475 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.733 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.733 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.733 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.733 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.298 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.557 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.815 00:18:12.815 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.815 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.815 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.815 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.815 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.815 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.815 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.073 { 00:18:13.073 "cntlid": 117, 00:18:13.073 "qid": 0, 00:18:13.073 "state": "enabled", 00:18:13.073 "thread": "nvmf_tgt_poll_group_000", 00:18:13.073 "listen_address": { 00:18:13.073 "trtype": "TCP", 00:18:13.073 "adrfam": "IPv4", 00:18:13.073 "traddr": "10.0.0.2", 00:18:13.073 "trsvcid": "4420" 00:18:13.073 }, 00:18:13.073 "peer_address": { 00:18:13.073 "trtype": "TCP", 00:18:13.073 "adrfam": "IPv4", 00:18:13.073 "traddr": "10.0.0.1", 00:18:13.073 "trsvcid": "41072" 00:18:13.073 }, 00:18:13.073 "auth": { 00:18:13.073 "state": "completed", 00:18:13.073 "digest": "sha512", 00:18:13.073 "dhgroup": "ffdhe3072" 00:18:13.073 } 00:18:13.073 } 00:18:13.073 ]' 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.073 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:13.330 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:13.896 11:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.896 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.153 00:18:14.153 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.153 11:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.153 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.410 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.410 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.410 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.410 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.410 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.410 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.410 { 00:18:14.410 "cntlid": 119, 00:18:14.410 "qid": 0, 00:18:14.410 "state": "enabled", 00:18:14.410 "thread": "nvmf_tgt_poll_group_000", 00:18:14.410 "listen_address": { 00:18:14.410 "trtype": "TCP", 00:18:14.410 "adrfam": "IPv4", 00:18:14.410 "traddr": "10.0.0.2", 00:18:14.410 "trsvcid": "4420" 00:18:14.410 }, 00:18:14.410 "peer_address": { 00:18:14.410 "trtype": "TCP", 00:18:14.410 "adrfam": "IPv4", 00:18:14.410 "traddr": "10.0.0.1", 00:18:14.410 "trsvcid": "41112" 00:18:14.410 }, 00:18:14.410 "auth": { 00:18:14.410 "state": "completed", 00:18:14.410 "digest": "sha512", 00:18:14.410 "dhgroup": "ffdhe3072" 00:18:14.410 } 00:18:14.410 } 00:18:14.410 ]' 00:18:14.410 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.410 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.410 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.410 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.410 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.668 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.668 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.668 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.668 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.235 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:15.235 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.493 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.493 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.751 00:18:15.751 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.751 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.751 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.009 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.009 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.009 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.009 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.009 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.009 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.010 { 00:18:16.010 "cntlid": 121, 00:18:16.010 "qid": 0, 00:18:16.010 "state": "enabled", 00:18:16.010 "thread": 
"nvmf_tgt_poll_group_000", 00:18:16.010 "listen_address": { 00:18:16.010 "trtype": "TCP", 00:18:16.010 "adrfam": "IPv4", 00:18:16.010 "traddr": "10.0.0.2", 00:18:16.010 "trsvcid": "4420" 00:18:16.010 }, 00:18:16.010 "peer_address": { 00:18:16.010 "trtype": "TCP", 00:18:16.010 "adrfam": "IPv4", 00:18:16.010 "traddr": "10.0.0.1", 00:18:16.010 "trsvcid": "41146" 00:18:16.010 }, 00:18:16.010 "auth": { 00:18:16.010 "state": "completed", 00:18:16.010 "digest": "sha512", 00:18:16.010 "dhgroup": "ffdhe4096" 00:18:16.010 } 00:18:16.010 } 00:18:16.010 ]' 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.010 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.268 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.835 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.093 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.093 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.352 { 00:18:17.352 "cntlid": 123, 00:18:17.352 "qid": 0, 00:18:17.352 "state": "enabled", 00:18:17.352 "thread": "nvmf_tgt_poll_group_000", 00:18:17.352 "listen_address": { 00:18:17.352 "trtype": "TCP", 00:18:17.352 "adrfam": "IPv4", 00:18:17.352 "traddr": "10.0.0.2", 00:18:17.352 "trsvcid": "4420" 00:18:17.352 }, 00:18:17.352 "peer_address": { 00:18:17.352 "trtype": "TCP", 00:18:17.352 "adrfam": "IPv4", 00:18:17.352 "traddr": "10.0.0.1", 00:18:17.352 "trsvcid": "41176" 00:18:17.352 }, 00:18:17.352 "auth": { 00:18:17.352 "state": "completed", 00:18:17.352 "digest": "sha512", 00:18:17.352 "dhgroup": "ffdhe4096" 00:18:17.352 } 00:18:17.352 } 00:18:17.352 ]' 00:18:17.352 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:17.610 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==:
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:18.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:18.181 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.439 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.440 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.440 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:18.440 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:18.697
00:18:18.697 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:18.697 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:18.697 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:18.956 {
00:18:18.956 "cntlid": 125,
00:18:18.956 "qid": 0,
00:18:18.956 "state": "enabled",
00:18:18.956 "thread": "nvmf_tgt_poll_group_000",
00:18:18.956 "listen_address": {
00:18:18.956 "trtype": "TCP",
00:18:18.956 "adrfam": "IPv4",
00:18:18.956 "traddr": "10.0.0.2",
00:18:18.956 "trsvcid": "4420"
00:18:18.956 },
00:18:18.956 "peer_address": {
00:18:18.956 "trtype": "TCP",
00:18:18.956 "adrfam": "IPv4",
00:18:18.956 "traddr": "10.0.0.1",
00:18:18.956 "trsvcid": "41192"
00:18:18.956 },
00:18:18.956 "auth": {
00:18:18.956 "state": "completed",
00:18:18.956 "digest": "sha512",
00:18:18.956 "dhgroup": "ffdhe4096"
00:18:18.956 }
00:18:18.956 }
00:18:18.956 ]'
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:18.956 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:19.215 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T:
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:19.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:19.782 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:20.041 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:20.364
00:18:20.364 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:20.364 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:20.364 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:20.624 {
00:18:20.624 "cntlid": 127,
00:18:20.624 "qid": 0,
00:18:20.624 "state": "enabled",
00:18:20.624 "thread": "nvmf_tgt_poll_group_000",
00:18:20.624 "listen_address": {
00:18:20.624 "trtype": "TCP",
00:18:20.624 "adrfam": "IPv4",
00:18:20.624 "traddr": "10.0.0.2",
00:18:20.624 "trsvcid": "4420"
00:18:20.624 },
00:18:20.624 "peer_address": {
00:18:20.624 "trtype": "TCP",
00:18:20.624 "adrfam": "IPv4",
00:18:20.624 "traddr": "10.0.0.1",
00:18:20.624 "trsvcid": "45354"
00:18:20.624 },
00:18:20.624 "auth": {
00:18:20.624 "state": "completed",
00:18:20.624 "digest": "sha512",
00:18:20.624 "dhgroup": "ffdhe4096"
00:18:20.624 }
00:18:20.624 }
00:18:20.624 ]'
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:20.624 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:20.882 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=:
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:21.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:21.449 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:21.449 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:21.780
00:18:21.780 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:21.780 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:21.780 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:22.038 {
00:18:22.038 "cntlid": 129,
00:18:22.038 "qid": 0,
00:18:22.038 "state": "enabled",
00:18:22.038 "thread": "nvmf_tgt_poll_group_000",
00:18:22.038 "listen_address": {
00:18:22.038 "trtype": "TCP",
00:18:22.038 "adrfam": "IPv4",
00:18:22.038 "traddr": "10.0.0.2",
00:18:22.038 "trsvcid": "4420"
00:18:22.038 },
00:18:22.038 "peer_address": {
00:18:22.038 "trtype": "TCP",
00:18:22.038 "adrfam": "IPv4",
00:18:22.038 "traddr": "10.0.0.1",
00:18:22.038 "trsvcid": "45380"
00:18:22.038 },
00:18:22.038 "auth": {
00:18:22.038 "state": "completed",
00:18:22.038 "digest": "sha512",
00:18:22.038 "dhgroup": "ffdhe6144"
00:18:22.038 }
00:18:22.038 }
00:18:22.038 ]'
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:22.038 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:22.296 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:22.296 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:22.296 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:22.296 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:22.296 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:22.296 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=:
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:22.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:22.863 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:23.122 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:23.381
00:18:23.381 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:23.381 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:23.381 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:23.639 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:23.639 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:23.639 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:23.639 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.639 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:23.639 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:23.639 {
00:18:23.639 "cntlid": 131,
00:18:23.639 "qid": 0,
00:18:23.639 "state": "enabled",
00:18:23.639 "thread": "nvmf_tgt_poll_group_000",
00:18:23.639 "listen_address": {
00:18:23.639 "trtype": "TCP",
00:18:23.639 "adrfam": "IPv4",
00:18:23.639 "traddr": "10.0.0.2",
00:18:23.639 "trsvcid": "4420"
00:18:23.639 },
00:18:23.639 "peer_address": {
00:18:23.639 "trtype": "TCP",
00:18:23.639 "adrfam": "IPv4",
00:18:23.639 "traddr": "10.0.0.1",
00:18:23.639 "trsvcid": "45410"
00:18:23.639 },
00:18:23.639 "auth": {
00:18:23.639 "state": "completed",
00:18:23.640 "digest": "sha512",
00:18:23.640 "dhgroup": "ffdhe6144"
00:18:23.640 }
00:18:23.640 }
00:18:23.640 ]'
00:18:23.640 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:23.640 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:23.640 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:23.640 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:23.640 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:23.898 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:23.898 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:23.898 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:23.898 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==:
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:24.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:24.465 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:24.724 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:24.983
00:18:24.983 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:24.983 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:24.983 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:25.242 {
00:18:25.242 "cntlid": 133,
00:18:25.242 "qid": 0,
00:18:25.242 "state": "enabled",
00:18:25.242 "thread": "nvmf_tgt_poll_group_000",
00:18:25.242 "listen_address": {
00:18:25.242 "trtype": "TCP",
00:18:25.242 "adrfam": "IPv4",
00:18:25.242 "traddr": "10.0.0.2",
00:18:25.242 "trsvcid": "4420"
00:18:25.242 },
00:18:25.242 "peer_address": {
00:18:25.242 "trtype": "TCP",
00:18:25.242 "adrfam": "IPv4",
00:18:25.242 "traddr": "10.0.0.1",
00:18:25.242 "trsvcid": "45446"
00:18:25.242 },
00:18:25.242 "auth": {
00:18:25.242 "state": "completed",
00:18:25.242 "digest": "sha512",
00:18:25.242 "dhgroup": "ffdhe6144"
00:18:25.242 }
00:18:25.242 }
00:18:25.242 ]'
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:25.242 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:25.501 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T:
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:26.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:26.068 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:26.327 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.586 00:18:26.586 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.586 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.586 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.844 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.845 { 00:18:26.845 "cntlid": 135, 00:18:26.845 "qid": 0, 00:18:26.845 "state": "enabled", 00:18:26.845 "thread": "nvmf_tgt_poll_group_000", 00:18:26.845 "listen_address": { 00:18:26.845 "trtype": "TCP", 00:18:26.845 "adrfam": "IPv4", 00:18:26.845 "traddr": "10.0.0.2", 00:18:26.845 "trsvcid": "4420" 00:18:26.845 }, 00:18:26.845 "peer_address": { 00:18:26.845 "trtype": "TCP", 00:18:26.845 "adrfam": "IPv4", 00:18:26.845 "traddr": "10.0.0.1", 
00:18:26.845 "trsvcid": "45466" 00:18:26.845 }, 00:18:26.845 "auth": { 00:18:26.845 "state": "completed", 00:18:26.845 "digest": "sha512", 00:18:26.845 "dhgroup": "ffdhe6144" 00:18:26.845 } 00:18:26.845 } 00:18:26.845 ]' 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.845 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.103 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.669 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.928 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.186 00:18:28.186 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.186 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.186 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.445 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.445 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.445 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.445 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.445 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.445 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.445 { 00:18:28.445 "cntlid": 137, 00:18:28.445 "qid": 0, 00:18:28.445 "state": "enabled", 00:18:28.445 "thread": "nvmf_tgt_poll_group_000", 00:18:28.445 "listen_address": { 00:18:28.445 "trtype": "TCP", 00:18:28.445 "adrfam": "IPv4", 00:18:28.445 "traddr": "10.0.0.2", 00:18:28.445 "trsvcid": "4420" 00:18:28.445 }, 00:18:28.445 "peer_address": { 00:18:28.445 "trtype": "TCP", 00:18:28.445 "adrfam": "IPv4", 00:18:28.445 "traddr": "10.0.0.1", 00:18:28.445 "trsvcid": "45502" 00:18:28.445 }, 00:18:28.445 "auth": { 00:18:28.445 "state": "completed", 00:18:28.445 "digest": "sha512", 00:18:28.445 "dhgroup": "ffdhe8192" 00:18:28.445 } 00:18:28.445 } 00:18:28.445 ]' 00:18:28.445 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.445 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.445 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.445 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.445 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.704 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.704 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.704 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.704 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.272 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.531 11:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.531 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:30.098 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.098 { 00:18:30.098 "cntlid": 139, 00:18:30.098 "qid": 0, 00:18:30.098 "state": "enabled", 00:18:30.098 "thread": "nvmf_tgt_poll_group_000", 00:18:30.098 "listen_address": { 00:18:30.098 "trtype": "TCP", 00:18:30.098 "adrfam": "IPv4", 00:18:30.098 "traddr": "10.0.0.2", 00:18:30.098 "trsvcid": "4420" 00:18:30.098 }, 00:18:30.098 "peer_address": { 00:18:30.098 "trtype": "TCP", 00:18:30.098 "adrfam": "IPv4", 00:18:30.098 "traddr": "10.0.0.1", 00:18:30.098 "trsvcid": "60468" 00:18:30.098 }, 00:18:30.098 "auth": { 00:18:30.098 "state": "completed", 00:18:30.098 "digest": "sha512", 00:18:30.098 "dhgroup": "ffdhe8192" 00:18:30.098 } 00:18:30.098 } 00:18:30.098 ]' 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.098 
11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.098 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.356 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.356 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.356 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.356 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.356 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.356 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI3NDlmZWVkYmQ1Y2VkYTJjMzY4MDIyNTQzOGMzMjZWX4m9: --dhchap-ctrl-secret DHHC-1:02:MWU2MWE0YTQxZWZhMzk4NjdlY2I2ZjAyNmUwZDYxZGRhZTcyYjk0ODNjZjBiM2FkruUOsA==: 00:18:30.921 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.921 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.921 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.921 11:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.921 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.921 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.921 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.921 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.179 11:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.179 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.747 00:18:31.747 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.747 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.747 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.005 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.005 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.005 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.005 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.005 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.005 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.005 { 
00:18:32.005 "cntlid": 141, 00:18:32.005 "qid": 0, 00:18:32.005 "state": "enabled", 00:18:32.005 "thread": "nvmf_tgt_poll_group_000", 00:18:32.005 "listen_address": { 00:18:32.005 "trtype": "TCP", 00:18:32.005 "adrfam": "IPv4", 00:18:32.005 "traddr": "10.0.0.2", 00:18:32.005 "trsvcid": "4420" 00:18:32.005 }, 00:18:32.005 "peer_address": { 00:18:32.005 "trtype": "TCP", 00:18:32.005 "adrfam": "IPv4", 00:18:32.005 "traddr": "10.0.0.1", 00:18:32.005 "trsvcid": "60494" 00:18:32.005 }, 00:18:32.005 "auth": { 00:18:32.005 "state": "completed", 00:18:32.005 "digest": "sha512", 00:18:32.005 "dhgroup": "ffdhe8192" 00:18:32.005 } 00:18:32.005 } 00:18:32.005 ]' 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.006 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmY5NTg2NzExOWU5OGZiZmNkMzY2NjYyOTE0YTYxZWVhYmM4OTNjZTYzNjMwZWE324S1pg==: --dhchap-ctrl-secret DHHC-1:01:NzdhY2VkNTJjMmMzYTRiNDJhMjlhMTJlMGYzMTg2NDGvAg5T: 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.089 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:33.089 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.089 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.089 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.089 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.089 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.348 00:18:33.348 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.348 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.348 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.607 11:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.607 { 00:18:33.607 "cntlid": 143, 00:18:33.607 "qid": 0, 00:18:33.607 "state": "enabled", 00:18:33.607 "thread": "nvmf_tgt_poll_group_000", 00:18:33.607 "listen_address": { 00:18:33.607 "trtype": "TCP", 00:18:33.607 "adrfam": "IPv4", 00:18:33.607 "traddr": "10.0.0.2", 00:18:33.607 "trsvcid": "4420" 00:18:33.607 }, 00:18:33.607 "peer_address": { 00:18:33.607 "trtype": "TCP", 00:18:33.607 "adrfam": "IPv4", 00:18:33.607 "traddr": "10.0.0.1", 00:18:33.607 "trsvcid": "60514" 00:18:33.607 }, 00:18:33.607 "auth": { 00:18:33.607 "state": "completed", 00:18:33.607 "digest": "sha512", 00:18:33.607 "dhgroup": "ffdhe8192" 00:18:33.607 } 00:18:33.607 } 00:18:33.607 ]' 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.607 11:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.607 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.866 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:18:34.431 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.431 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.431 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.431 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.431 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.431 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:34.432 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:34.432 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:34.432 11:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.432 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.432 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.689 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:34.689 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.689 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.689 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.689 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.690 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.690 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.690 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.690 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.690 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:34.690 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.690 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.256 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.256 { 00:18:35.256 "cntlid": 145, 00:18:35.256 "qid": 0, 00:18:35.256 "state": "enabled", 
00:18:35.256 "thread": "nvmf_tgt_poll_group_000", 00:18:35.256 "listen_address": { 00:18:35.256 "trtype": "TCP", 00:18:35.256 "adrfam": "IPv4", 00:18:35.256 "traddr": "10.0.0.2", 00:18:35.256 "trsvcid": "4420" 00:18:35.256 }, 00:18:35.256 "peer_address": { 00:18:35.256 "trtype": "TCP", 00:18:35.256 "adrfam": "IPv4", 00:18:35.256 "traddr": "10.0.0.1", 00:18:35.256 "trsvcid": "60542" 00:18:35.256 }, 00:18:35.256 "auth": { 00:18:35.256 "state": "completed", 00:18:35.256 "digest": "sha512", 00:18:35.256 "dhgroup": "ffdhe8192" 00:18:35.256 } 00:18:35.256 } 00:18:35.256 ]' 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.256 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.514 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.514 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.514 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.514 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.514 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.514 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NGQ3NGNhMTQ3MmQ0OTM5YWE3YTQwNzkzOGE0OGZiZDhkZjVmZjMzOTI2NjA3YmNkP9i+rw==: --dhchap-ctrl-secret DHHC-1:03:Y2Q4YTNmOGU4ZTI1YjI2ZTgwZDE0MGE2ZDk4MzZlNzVhZTgzY2Y4MjIzN2RlMjcyODAwYTI4OWZlZmI3MjlkN5LDEeI=: 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.080 
11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.080 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.644 request: 00:18:36.644 { 00:18:36.644 "name": "nvme0", 00:18:36.644 "trtype": "tcp", 00:18:36.644 "traddr": "10.0.0.2", 00:18:36.644 "adrfam": "ipv4", 00:18:36.644 "trsvcid": "4420", 00:18:36.644 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:36.644 "prchk_reftag": false, 00:18:36.644 "prchk_guard": false, 00:18:36.644 "hdgst": false, 00:18:36.644 "ddgst": false, 00:18:36.644 "dhchap_key": "key2", 
00:18:36.644 "method": "bdev_nvme_attach_controller", 00:18:36.644 "req_id": 1 00:18:36.644 } 00:18:36.644 Got JSON-RPC error response 00:18:36.644 response: 00:18:36.644 { 00:18:36.644 "code": -5, 00:18:36.644 "message": "Input/output error" 00:18:36.644 } 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.644 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:36.645 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.211 request: 00:18:37.211 { 00:18:37.211 "name": "nvme0", 00:18:37.211 
"trtype": "tcp", 00:18:37.211 "traddr": "10.0.0.2", 00:18:37.211 "adrfam": "ipv4", 00:18:37.211 "trsvcid": "4420", 00:18:37.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:37.211 "prchk_reftag": false, 00:18:37.211 "prchk_guard": false, 00:18:37.211 "hdgst": false, 00:18:37.211 "ddgst": false, 00:18:37.211 "dhchap_key": "key1", 00:18:37.211 "dhchap_ctrlr_key": "ckey2", 00:18:37.211 "method": "bdev_nvme_attach_controller", 00:18:37.211 "req_id": 1 00:18:37.211 } 00:18:37.211 Got JSON-RPC error response 00:18:37.211 response: 00:18:37.211 { 00:18:37.211 "code": -5, 00:18:37.211 "message": "Input/output error" 00:18:37.211 } 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.211 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.212 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:37.212 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.212 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:37.212 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.212 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:37.212 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.212 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.212 11:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.470 request: 00:18:37.470 { 00:18:37.470 "name": "nvme0", 00:18:37.470 "trtype": "tcp", 00:18:37.470 "traddr": "10.0.0.2", 00:18:37.470 "adrfam": "ipv4", 00:18:37.470 "trsvcid": "4420", 00:18:37.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:37.471 "prchk_reftag": false, 00:18:37.471 "prchk_guard": false, 00:18:37.471 "hdgst": false, 00:18:37.471 "ddgst": false, 00:18:37.471 "dhchap_key": "key1", 00:18:37.471 "dhchap_ctrlr_key": "ckey1", 00:18:37.471 "method": "bdev_nvme_attach_controller", 00:18:37.471 "req_id": 1 00:18:37.471 } 00:18:37.471 Got JSON-RPC error response 00:18:37.471 response: 00:18:37.471 { 00:18:37.471 "code": -5, 00:18:37.471 "message": "Input/output error" 00:18:37.471 } 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1508175 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1508175 ']' 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1508175 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508175 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508175' 00:18:37.471 killing process with pid 1508175 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1508175 00:18:37.471 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1508175 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1528343 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1528343 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1528343 ']' 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.729 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:38.660 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1528343 00:18:38.661 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1528343 ']' 00:18:38.661 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.661 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.661 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.661 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.661 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.918 
11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.918 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.483 00:18:39.483 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.483 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.483 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.739 11:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.739 { 00:18:39.739 "cntlid": 1, 00:18:39.739 "qid": 0, 00:18:39.739 "state": "enabled", 00:18:39.739 "thread": "nvmf_tgt_poll_group_000", 00:18:39.739 "listen_address": { 00:18:39.739 "trtype": "TCP", 00:18:39.739 "adrfam": "IPv4", 00:18:39.739 "traddr": "10.0.0.2", 00:18:39.739 "trsvcid": "4420" 00:18:39.739 }, 00:18:39.739 "peer_address": { 00:18:39.739 "trtype": "TCP", 00:18:39.739 "adrfam": "IPv4", 00:18:39.739 "traddr": "10.0.0.1", 00:18:39.739 "trsvcid": "60584" 00:18:39.739 }, 00:18:39.739 "auth": { 00:18:39.739 "state": "completed", 00:18:39.739 "digest": "sha512", 00:18:39.739 "dhgroup": "ffdhe8192" 00:18:39.739 } 00:18:39.739 } 00:18:39.739 ]' 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.998 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzI0MzMyYmI0YjhlOWFjOGQzYzE3ZDQwNGE3ZTViMDhjZWQ5YjI3ZTE0ZDZkNTNlMDMzNGY2MzkyOTg4YTNmOQS9WdI=: 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:40.564 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:40.822 11:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.822 request: 00:18:40.822 { 00:18:40.822 "name": "nvme0", 00:18:40.822 "trtype": "tcp", 00:18:40.822 
"traddr": "10.0.0.2", 00:18:40.822 "adrfam": "ipv4", 00:18:40.822 "trsvcid": "4420", 00:18:40.822 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:40.822 "prchk_reftag": false, 00:18:40.822 "prchk_guard": false, 00:18:40.822 "hdgst": false, 00:18:40.822 "ddgst": false, 00:18:40.822 "dhchap_key": "key3", 00:18:40.822 "method": "bdev_nvme_attach_controller", 00:18:40.822 "req_id": 1 00:18:40.822 } 00:18:40.822 Got JSON-RPC error response 00:18:40.822 response: 00:18:40.822 { 00:18:40.822 "code": -5, 00:18:40.822 "message": "Input/output error" 00:18:40.822 } 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:40.822 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.080 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.338 request: 00:18:41.338 { 00:18:41.338 "name": "nvme0", 00:18:41.338 "trtype": "tcp", 00:18:41.338 "traddr": "10.0.0.2", 00:18:41.338 "adrfam": "ipv4", 00:18:41.338 "trsvcid": "4420", 00:18:41.338 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.338 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:41.338 "prchk_reftag": false, 00:18:41.338 "prchk_guard": false, 00:18:41.338 "hdgst": false, 00:18:41.338 "ddgst": false, 00:18:41.338 "dhchap_key": "key3", 00:18:41.338 "method": "bdev_nvme_attach_controller", 00:18:41.338 "req_id": 1 00:18:41.338 } 00:18:41.338 Got JSON-RPC error response 00:18:41.338 response: 00:18:41.338 { 00:18:41.338 "code": -5, 00:18:41.338 "message": "Input/output error" 00:18:41.338 } 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.338 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.596 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.596 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.596 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.596 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.596 request: 00:18:41.596 { 00:18:41.596 "name": "nvme0", 00:18:41.596 "trtype": "tcp", 00:18:41.596 "traddr": "10.0.0.2", 00:18:41.596 "adrfam": "ipv4", 00:18:41.596 "trsvcid": "4420", 00:18:41.596 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:41.596 "prchk_reftag": false, 00:18:41.596 "prchk_guard": false, 00:18:41.596 "hdgst": false, 00:18:41.596 "ddgst": false, 00:18:41.596 "dhchap_key": "key0", 00:18:41.596 "dhchap_ctrlr_key": "key1", 00:18:41.596 "method": "bdev_nvme_attach_controller", 00:18:41.596 "req_id": 1 00:18:41.596 } 00:18:41.596 Got JSON-RPC error response 00:18:41.596 response: 00:18:41.596 { 00:18:41.596 "code": -5, 00:18:41.596 "message": "Input/output error" 00:18:41.596 } 00:18:41.596 11:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:41.596 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:41.853 00:18:41.853 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:41.853 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:41.853 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.111 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.111 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.111 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1508206 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1508206 ']' 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1508206 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508206 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508206' 00:18:42.370 killing process with pid 1508206 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1508206 00:18:42.370 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1508206 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:42.628 11:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.628 rmmod nvme_tcp 00:18:42.628 rmmod nvme_fabrics 00:18:42.628 rmmod nvme_keyring 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1528343 ']' 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1528343 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1528343 ']' 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1528343 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528343 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:42.628 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:42.629 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1528343' 00:18:42.629 killing process with pid 1528343 00:18:42.629 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1528343 00:18:42.629 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1528343 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.887 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.858 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.858 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ezp /tmp/spdk.key-sha256.UHZ /tmp/spdk.key-sha384.Um4 /tmp/spdk.key-sha512.fuh /tmp/spdk.key-sha512.PJz /tmp/spdk.key-sha384.dQq /tmp/spdk.key-sha256.wW0 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:44.858 00:18:44.858 real 2m10.370s 00:18:44.858 user 4m59.196s 00:18:44.858 sys 0m20.560s 00:18:44.858 11:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.858 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.858 ************************************ 00:18:44.858 END TEST nvmf_auth_target 00:18:44.858 ************************************ 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.116 ************************************ 00:18:45.116 START TEST nvmf_bdevio_no_huge 00:18:45.116 ************************************ 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:45.116 * Looking for test storage... 
00:18:45.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.116 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:45.117 
11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.117 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.727 11:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:51.727 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:51.727 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:51.727 Found net devices under 0000:86:00.0: cvl_0_0 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:51.727 Found net devices under 0000:86:00.1: cvl_0_1 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.727 11:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.727 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:18:51.728 00:18:51.728 --- 10.0.0.2 ping statistics --- 00:18:51.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.728 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:18:51.728 00:18:51.728 --- 10.0.0.1 ping statistics --- 00:18:51.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.728 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1532758 00:18:51.728 11:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1532758 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1532758 ']' 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.728 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 [2024-07-26 11:26:46.474398] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:18:51.728 [2024-07-26 11:26:46.474448] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:51.728 [2024-07-26 11:26:46.551067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.728 [2024-07-26 11:26:46.635399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:51.728 [2024-07-26 11:26:46.635429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.728 [2024-07-26 11:26:46.635438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.728 [2024-07-26 11:26:46.635444] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.728 [2024-07-26 11:26:46.635448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.728 [2024-07-26 11:26:46.635498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:51.728 [2024-07-26 11:26:46.635605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:51.728 [2024-07-26 11:26:46.635711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.728 [2024-07-26 11:26:46.635712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 [2024-07-26 11:26:47.325683] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 Malloc0 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 [2024-07-26 11:26:47.369944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:51.728 { 00:18:51.728 "params": { 00:18:51.728 "name": "Nvme$subsystem", 00:18:51.728 "trtype": "$TEST_TRANSPORT", 00:18:51.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.728 "adrfam": "ipv4", 00:18:51.728 "trsvcid": "$NVMF_PORT", 00:18:51.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.728 "hdgst": ${hdgst:-false}, 00:18:51.728 "ddgst": ${ddgst:-false} 00:18:51.728 }, 00:18:51.728 "method": "bdev_nvme_attach_controller" 00:18:51.728 } 00:18:51.728 EOF 00:18:51.728 )") 00:18:51.728 11:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:51.728 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:51.987 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:51.987 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:51.987 "params": { 00:18:51.987 "name": "Nvme1", 00:18:51.987 "trtype": "tcp", 00:18:51.987 "traddr": "10.0.0.2", 00:18:51.987 "adrfam": "ipv4", 00:18:51.987 "trsvcid": "4420", 00:18:51.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.987 "hdgst": false, 00:18:51.987 "ddgst": false 00:18:51.987 }, 00:18:51.987 "method": "bdev_nvme_attach_controller" 00:18:51.987 }' 00:18:51.987 [2024-07-26 11:26:47.420777] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:18:51.987 [2024-07-26 11:26:47.420824] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1532852 ] 00:18:51.987 [2024-07-26 11:26:47.490223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:51.987 [2024-07-26 11:26:47.575720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.987 [2024-07-26 11:26:47.575827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.987 [2024-07-26 11:26:47.575828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.245 I/O targets: 00:18:52.245 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:52.245 00:18:52.245 00:18:52.245 CUnit - A unit testing framework for C - Version 2.1-3 00:18:52.245 http://cunit.sourceforge.net/ 00:18:52.245 00:18:52.245 00:18:52.245 Suite: bdevio tests on: Nvme1n1 00:18:52.502 Test: blockdev write read block 
...passed 00:18:52.502 Test: blockdev write zeroes read block ...passed 00:18:52.502 Test: blockdev write zeroes read no split ...passed 00:18:52.502 Test: blockdev write zeroes read split ...passed 00:18:52.502 Test: blockdev write zeroes read split partial ...passed 00:18:52.502 Test: blockdev reset ...[2024-07-26 11:26:48.004534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.502 [2024-07-26 11:26:48.004593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513300 (9): Bad file descriptor 00:18:52.502 [2024-07-26 11:26:48.098487] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:52.502 passed 00:18:52.502 Test: blockdev write read 8 blocks ...passed 00:18:52.502 Test: blockdev write read size > 128k ...passed 00:18:52.502 Test: blockdev write read invalid size ...passed 00:18:52.759 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:52.759 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:52.759 Test: blockdev write read max offset ...passed 00:18:52.759 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:52.759 Test: blockdev writev readv 8 blocks ...passed 00:18:52.759 Test: blockdev writev readv 30 x 1block ...passed 00:18:52.759 Test: blockdev writev readv block ...passed 00:18:52.759 Test: blockdev writev readv size > 128k ...passed 00:18:52.759 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:52.759 Test: blockdev comparev and writev ...[2024-07-26 11:26:48.353379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.353404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.759 [2024-07-26 11:26:48.353418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.353425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.759 [2024-07-26 11:26:48.353655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.353665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.759 [2024-07-26 11:26:48.353676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.353683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.759 [2024-07-26 11:26:48.353909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.353918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.759 [2024-07-26 11:26:48.353929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.353935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.759 [2024-07-26 11:26:48.354163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.354172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:18:52.759 [2024-07-26 11:26:48.354184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.759 [2024-07-26 11:26:48.354190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:52.759 passed 00:18:53.016 Test: blockdev nvme passthru rw ...passed 00:18:53.016 Test: blockdev nvme passthru vendor specific ...[2024-07-26 11:26:48.436912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.016 [2024-07-26 11:26:48.436927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:53.016 [2024-07-26 11:26:48.437040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.016 [2024-07-26 11:26:48.437049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:53.016 [2024-07-26 11:26:48.437158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.016 [2024-07-26 11:26:48.437166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:53.016 [2024-07-26 11:26:48.437269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.016 [2024-07-26 11:26:48.437277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:53.017 passed 00:18:53.017 Test: blockdev nvme admin passthru ...passed 00:18:53.017 Test: blockdev copy ...passed 00:18:53.017 00:18:53.017 Run Summary: Type Total Ran Passed Failed Inactive 00:18:53.017 suites 1 1 
n/a 0 0 00:18:53.017 tests 23 23 23 0 0 00:18:53.017 asserts 152 152 152 0 n/a 00:18:53.017 00:18:53.017 Elapsed time = 1.243 seconds 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.274 rmmod nvme_tcp 00:18:53.274 rmmod nvme_fabrics 00:18:53.274 rmmod nvme_keyring 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:53.274 11:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1532758 ']' 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1532758 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1532758 ']' 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1532758 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1532758 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1532758' 00:18:53.274 killing process with pid 1532758 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1532758 00:18:53.274 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1532758 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.533 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.069 00:18:56.069 real 0m10.683s 00:18:56.069 user 0m14.118s 00:18:56.069 sys 0m5.218s 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.069 ************************************ 00:18:56.069 END TEST nvmf_bdevio_no_huge 00:18:56.069 ************************************ 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.069 ************************************ 00:18:56.069 START TEST nvmf_tls 00:18:56.069 ************************************ 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.069 * Looking for test storage... 
00:18:56.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.069 
11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.069 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:56.070 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:56.070 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.070 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.346 11:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.346 11:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:01.346 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:01.346 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.346 11:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:01.346 Found net devices under 0000:86:00.0: cvl_0_0 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:01.346 Found net devices under 0000:86:00.1: cvl_0_1 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.346 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:01.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:01.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:19:01.606 00:19:01.606 --- 10.0.0.2 ping statistics --- 00:19:01.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.606 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:19:01.606 00:19:01.606 --- 10.0.0.1 ping statistics --- 00:19:01.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.606 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1536600 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1536600 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1536600 ']' 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.606 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.606 [2024-07-26 11:26:57.263699] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:19:01.606 [2024-07-26 11:26:57.263742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.865 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.865 [2024-07-26 11:26:57.334702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.865 [2024-07-26 11:26:57.406262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.865 [2024-07-26 11:26:57.406300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.865 [2024-07-26 11:26:57.406307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.865 [2024-07-26 11:26:57.406313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.865 [2024-07-26 11:26:57.406317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.865 [2024-07-26 11:26:57.406337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.430 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.430 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:02.430 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.430 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.430 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.689 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.689 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:02.689 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:02.689 true 00:19:02.689 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.689 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:02.948 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:02.948 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:02.948 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:03.206 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.206 11:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:03.206 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:03.206 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:03.206 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:03.466 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.466 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:03.725 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:03.725 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:03.725 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.725 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:03.725 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:03.725 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:03.725 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:03.983 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.983 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:04.241 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:04.241 
11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:04.241 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:04.241 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.241 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:04.500 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.mXRMjH7tXk 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.WpzLDhyiIC 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.mXRMjH7tXk 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WpzLDhyiIC 00:19:04.500 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:04.758 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:05.016 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.mXRMjH7tXk 00:19:05.016 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mXRMjH7tXk 00:19:05.016 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:05.273 [2024-07-26 11:27:00.692809] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.273 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.273 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:05.531 [2024-07-26 11:27:01.045700] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.531 [2024-07-26 11:27:01.045889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.531 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.789 malloc0 00:19:05.789 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.789 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mXRMjH7tXk 00:19:06.048 
[2024-07-26 11:27:01.599240] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:06.048 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.mXRMjH7tXk 00:19:06.048 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.250 Initializing NVMe Controllers 00:19:18.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:18.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:18.250 Initialization complete. Launching workers. 00:19:18.250 ======================================================== 00:19:18.250 Latency(us) 00:19:18.250 Device Information : IOPS MiB/s Average min max 00:19:18.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16932.86 66.14 3780.01 842.95 8076.24 00:19:18.250 ======================================================== 00:19:18.250 Total : 16932.86 66.14 3780.01 842.95 8076.24 00:19:18.250 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mXRMjH7tXk 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mXRMjH7tXk' 00:19:18.250 11:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1539544 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.250 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1539544 /var/tmp/bdevperf.sock 00:19:18.251 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1539544 ']' 00:19:18.251 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.251 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.251 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.251 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.251 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.251 [2024-07-26 11:27:11.784925] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:19:18.251 [2024-07-26 11:27:11.784972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539544 ] 00:19:18.251 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.251 [2024-07-26 11:27:11.851953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.251 [2024-07-26 11:27:11.923379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.251 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.251 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:18.251 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mXRMjH7tXk 00:19:18.251 [2024-07-26 11:27:12.733260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.251 [2024-07-26 11:27:12.733337] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:18.251 TLSTESTn1 00:19:18.251 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:18.251 Running I/O for 10 seconds... 
00:19:28.284 00:19:28.284 Latency(us) 00:19:28.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.285 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.285 Verification LBA range: start 0x0 length 0x2000 00:19:28.285 TLSTESTn1 : 10.02 5573.13 21.77 0.00 0.00 22928.68 4868.39 25215.76 00:19:28.285 =================================================================================================================== 00:19:28.285 Total : 5573.13 21.77 0.00 0.00 22928.68 4868.39 25215.76 00:19:28.285 0 00:19:28.285 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.285 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1539544 00:19:28.285 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1539544 ']' 00:19:28.285 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1539544 00:19:28.285 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.285 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.285 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1539544 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1539544' 00:19:28.285 killing process with pid 1539544 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1539544 00:19:28.285 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.285 
00:19:28.285 Latency(us) 00:19:28.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.285 =================================================================================================================== 00:19:28.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.285 [2024-07-26 11:27:23.024924] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1539544 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WpzLDhyiIC 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WpzLDhyiIC 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WpzLDhyiIC 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.285 11:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WpzLDhyiIC' 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1541440 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1541440 /var/tmp/bdevperf.sock 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1541440 ']' 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.285 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.285 [2024-07-26 11:27:23.254987] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:19:28.285 [2024-07-26 11:27:23.255036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541440 ] 00:19:28.285 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.285 [2024-07-26 11:27:23.314209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.285 [2024-07-26 11:27:23.385812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.543 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.543 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.543 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WpzLDhyiIC 00:19:28.802 [2024-07-26 11:27:24.210595] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.802 [2024-07-26 11:27:24.210667] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:28.803 [2024-07-26 11:27:24.215142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:28.803 [2024-07-26 11:27:24.215795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6570 (107): Transport endpoint is not connected 00:19:28.803 [2024-07-26 11:27:24.216788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6570 
(9): Bad file descriptor 00:19:28.803 [2024-07-26 11:27:24.217788] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.803 [2024-07-26 11:27:24.217799] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:28.803 [2024-07-26 11:27:24.217808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.803 request: 00:19:28.803 { 00:19:28.803 "name": "TLSTEST", 00:19:28.803 "trtype": "tcp", 00:19:28.803 "traddr": "10.0.0.2", 00:19:28.803 "adrfam": "ipv4", 00:19:28.803 "trsvcid": "4420", 00:19:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.803 "prchk_reftag": false, 00:19:28.803 "prchk_guard": false, 00:19:28.803 "hdgst": false, 00:19:28.803 "ddgst": false, 00:19:28.803 "psk": "/tmp/tmp.WpzLDhyiIC", 00:19:28.803 "method": "bdev_nvme_attach_controller", 00:19:28.803 "req_id": 1 00:19:28.803 } 00:19:28.803 Got JSON-RPC error response 00:19:28.803 response: 00:19:28.803 { 00:19:28.803 "code": -5, 00:19:28.803 "message": "Input/output error" 00:19:28.803 } 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1541440 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1541440 ']' 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1541440 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541440 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:28.803 11:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541440' 00:19:28.803 killing process with pid 1541440 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1541440 00:19:28.803 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.803 00:19:28.803 Latency(us) 00:19:28.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.803 =================================================================================================================== 00:19:28.803 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.803 [2024-07-26 11:27:24.287467] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1541440 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mXRMjH7tXk 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mXRMjH7tXk 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.803 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mXRMjH7tXk 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mXRMjH7tXk' 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1541594 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1541594 /var/tmp/bdevperf.sock 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1541594 ']' 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.062 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.062 [2024-07-26 11:27:24.509112] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:29.062 [2024-07-26 11:27:24.509162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541594 ] 00:19:29.062 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.062 [2024-07-26 11:27:24.567073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.062 [2024-07-26 11:27:24.640643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.mXRMjH7tXk 00:19:29.998 [2024-07-26 11:27:25.462742] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.998 [2024-07-26 11:27:25.462814] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:29.998 [2024-07-26 11:27:25.467301] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:29.998 [2024-07-26 11:27:25.467325] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:29.998 [2024-07-26 11:27:25.467349] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:29.998 [2024-07-26 11:27:25.468011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f7570 (107): Transport endpoint is not connected 00:19:29.998 [2024-07-26 11:27:25.469002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f7570 (9): Bad file descriptor 00:19:29.998 [2024-07-26 11:27:25.470003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.998 [2024-07-26 11:27:25.470013] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:29.998 [2024-07-26 11:27:25.470024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:29.998 request: 00:19:29.998 { 00:19:29.998 "name": "TLSTEST", 00:19:29.998 "trtype": "tcp", 00:19:29.998 "traddr": "10.0.0.2", 00:19:29.998 "adrfam": "ipv4", 00:19:29.998 "trsvcid": "4420", 00:19:29.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.998 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:29.998 "prchk_reftag": false, 00:19:29.998 "prchk_guard": false, 00:19:29.998 "hdgst": false, 00:19:29.998 "ddgst": false, 00:19:29.998 "psk": "/tmp/tmp.mXRMjH7tXk", 00:19:29.998 "method": "bdev_nvme_attach_controller", 00:19:29.998 "req_id": 1 00:19:29.998 } 00:19:29.998 Got JSON-RPC error response 00:19:29.998 response: 00:19:29.998 { 00:19:29.998 "code": -5, 00:19:29.998 "message": "Input/output error" 00:19:29.998 } 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1541594 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1541594 ']' 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1541594 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541594 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541594' 00:19:29.998 killing process with pid 1541594 00:19:29.998 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1541594 00:19:29.998 Received shutdown signal, test time was 
about 10.000000 seconds 00:19:29.998 00:19:29.998 Latency(us) 00:19:29.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.998 =================================================================================================================== 00:19:29.999 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.999 [2024-07-26 11:27:25.535970] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:29.999 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1541594 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mXRMjH7tXk 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mXRMjH7tXk 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mXRMjH7tXk 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mXRMjH7tXk' 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1541781 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1541781 /var/tmp/bdevperf.sock 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1541781 ']' 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.258 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.258 [2024-07-26 11:27:25.755882] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:30.258 [2024-07-26 11:27:25.755929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541781 ] 00:19:30.258 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.258 [2024-07-26 11:27:25.816276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.258 [2024-07-26 11:27:25.889049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.195 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.195 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.195 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mXRMjH7tXk 00:19:31.195 [2024-07-26 11:27:26.719302] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.195 [2024-07-26 11:27:26.719381] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:31.195 [2024-07-26 11:27:26.729569] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:31.195 [2024-07-26 11:27:26.729591] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:31.195 [2024-07-26 11:27:26.729613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:31.195 [2024-07-26 11:27:26.730576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c88570 (107): Transport endpoint is not connected 00:19:31.195 [2024-07-26 11:27:26.731569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c88570 (9): Bad file descriptor 00:19:31.195 [2024-07-26 11:27:26.732571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:31.195 [2024-07-26 11:27:26.732581] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:31.195 [2024-07-26 11:27:26.732590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:31.195 request: 00:19:31.195 { 00:19:31.195 "name": "TLSTEST", 00:19:31.195 "trtype": "tcp", 00:19:31.195 "traddr": "10.0.0.2", 00:19:31.195 "adrfam": "ipv4", 00:19:31.195 "trsvcid": "4420", 00:19:31.195 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:31.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.195 "prchk_reftag": false, 00:19:31.195 "prchk_guard": false, 00:19:31.195 "hdgst": false, 00:19:31.195 "ddgst": false, 00:19:31.195 "psk": "/tmp/tmp.mXRMjH7tXk", 00:19:31.195 "method": "bdev_nvme_attach_controller", 00:19:31.195 "req_id": 1 00:19:31.195 } 00:19:31.195 Got JSON-RPC error response 00:19:31.195 response: 00:19:31.195 { 00:19:31.195 "code": -5, 00:19:31.196 "message": "Input/output error" 00:19:31.196 } 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1541781 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1541781 ']' 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1541781 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541781 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541781' 00:19:31.196 killing process with pid 1541781 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1541781 00:19:31.196 Received shutdown signal, test time was 
about 10.000000 seconds 00:19:31.196 00:19:31.196 Latency(us) 00:19:31.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.196 =================================================================================================================== 00:19:31.196 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.196 [2024-07-26 11:27:26.802485] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:31.196 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1541781 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:31.456 11:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1542020 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1542020 /var/tmp/bdevperf.sock 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1542020 ']' 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:31.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.456 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.456 [2024-07-26 11:27:27.019241] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:31.456 [2024-07-26 11:27:27.019289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542020 ] 00:19:31.456 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.456 [2024-07-26 11:27:27.083135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.716 [2024-07-26 11:27:27.153890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.283 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.283 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.284 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:32.542 [2024-07-26 11:27:27.972759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:32.542 [2024-07-26 11:27:27.974198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c64af0 (9): Bad file descriptor 00:19:32.542 [2024-07-26 11:27:27.975196] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:32.542 [2024-07-26 11:27:27.975210] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:32.542 [2024-07-26 11:27:27.975219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:32.542 request: 00:19:32.542 { 00:19:32.542 "name": "TLSTEST", 00:19:32.542 "trtype": "tcp", 00:19:32.542 "traddr": "10.0.0.2", 00:19:32.542 "adrfam": "ipv4", 00:19:32.542 "trsvcid": "4420", 00:19:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.542 "prchk_reftag": false, 00:19:32.542 "prchk_guard": false, 00:19:32.542 "hdgst": false, 00:19:32.542 "ddgst": false, 00:19:32.542 "method": "bdev_nvme_attach_controller", 00:19:32.542 "req_id": 1 00:19:32.542 } 00:19:32.542 Got JSON-RPC error response 00:19:32.542 response: 00:19:32.542 { 00:19:32.542 "code": -5, 00:19:32.542 "message": "Input/output error" 00:19:32.542 } 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1542020 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1542020 ']' 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1542020 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1542020 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:32.542 11:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1542020' 00:19:32.542 killing process with pid 1542020 00:19:32.542 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1542020 00:19:32.542 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.542 00:19:32.542 Latency(us) 00:19:32.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.542 =================================================================================================================== 00:19:32.542 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.543 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1542020 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1536600 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1536600 ']' 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1536600 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536600 00:19:32.802 
11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536600' 00:19:32.802 killing process with pid 1536600 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1536600 00:19:32.802 [2024-07-26 11:27:28.261695] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1536600 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:32.802 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.V8XslquD3v 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.V8XslquD3v 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1542275 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1542275 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1542275 ']' 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.061 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.061 [2024-07-26 11:27:28.558723] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:33.061 [2024-07-26 11:27:28.558766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.061 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.061 [2024-07-26 11:27:28.630138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.061 [2024-07-26 11:27:28.705151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.061 [2024-07-26 11:27:28.705191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.061 [2024-07-26 11:27:28.705198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.061 [2024-07-26 11:27:28.705203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.062 [2024-07-26 11:27:28.705208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:33.062 [2024-07-26 11:27:28.705231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.V8XslquD3v 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V8XslquD3v 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.999 [2024-07-26 11:27:29.551162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.999 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:34.257 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:34.257 [2024-07-26 11:27:29.900060] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.257 [2024-07-26 11:27:29.900236] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:34.516 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:34.516 malloc0 00:19:34.516 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:34.775 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V8XslquD3v 00:19:35.033 [2024-07-26 11:27:30.469782] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V8XslquD3v 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.V8XslquD3v' 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1542750 00:19:35.033 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1542750 /var/tmp/bdevperf.sock 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1542750 ']' 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.034 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.034 [2024-07-26 11:27:30.543321] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:19:35.034 [2024-07-26 11:27:30.543369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542750 ] 00:19:35.034 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.034 [2024-07-26 11:27:30.600430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.034 [2024-07-26 11:27:30.673565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.969 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.969 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:35.969 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V8XslquD3v 00:19:35.969 [2024-07-26 11:27:31.502818] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.969 [2024-07-26 11:27:31.502886] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:35.970 TLSTESTn1 00:19:35.970 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:36.228 Running I/O for 10 seconds... 
00:19:46.206 00:19:46.206 Latency(us) 00:19:46.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.206 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:46.206 Verification LBA range: start 0x0 length 0x2000 00:19:46.207 TLSTESTn1 : 10.01 5554.47 21.70 0.00 0.00 23008.26 7084.13 25215.76 00:19:46.207 =================================================================================================================== 00:19:46.207 Total : 5554.47 21.70 0.00 0.00 23008.26 7084.13 25215.76 00:19:46.207 0 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1542750 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1542750 ']' 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1542750 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1542750 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1542750' 00:19:46.207 killing process with pid 1542750 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1542750 00:19:46.207 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.207 
00:19:46.207 Latency(us) 00:19:46.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.207 =================================================================================================================== 00:19:46.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.207 [2024-07-26 11:27:41.807804] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:46.207 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1542750 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.V8XslquD3v 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V8XslquD3v 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V8XslquD3v 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V8XslquD3v 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:46.466 11:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.V8XslquD3v' 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1544585 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1544585 /var/tmp/bdevperf.sock 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1544585 ']' 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:46.466 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.466 [2024-07-26 11:27:42.040820] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:46.466 [2024-07-26 11:27:42.040867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544585 ] 00:19:46.466 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.466 [2024-07-26 11:27:42.108499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.725 [2024-07-26 11:27:42.182262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.292 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.292 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.292 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V8XslquD3v 00:19:47.551 [2024-07-26 11:27:43.004399] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.551 [2024-07-26 11:27:43.004448] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:47.551 [2024-07-26 11:27:43.004460] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.V8XslquD3v 00:19:47.551 request: 00:19:47.551 { 00:19:47.551 "name": "TLSTEST", 00:19:47.551 "trtype": "tcp", 00:19:47.551 "traddr": "10.0.0.2", 00:19:47.551 
"adrfam": "ipv4", 00:19:47.551 "trsvcid": "4420", 00:19:47.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.551 "prchk_reftag": false, 00:19:47.551 "prchk_guard": false, 00:19:47.551 "hdgst": false, 00:19:47.551 "ddgst": false, 00:19:47.551 "psk": "/tmp/tmp.V8XslquD3v", 00:19:47.551 "method": "bdev_nvme_attach_controller", 00:19:47.551 "req_id": 1 00:19:47.551 } 00:19:47.551 Got JSON-RPC error response 00:19:47.551 response: 00:19:47.551 { 00:19:47.551 "code": -1, 00:19:47.551 "message": "Operation not permitted" 00:19:47.551 } 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1544585 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1544585 ']' 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1544585 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1544585 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1544585' 00:19:47.551 killing process with pid 1544585 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1544585 00:19:47.551 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.551 00:19:47.551 Latency(us) 00:19:47.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:47.551 =================================================================================================================== 00:19:47.551 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.551 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1544585 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1542275 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1542275 ']' 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1542275 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1542275 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1542275' 00:19:47.810 killing process with pid 1542275 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1542275 00:19:47.810 [2024-07-26 11:27:43.296607] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:47.810 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1542275 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1544829 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1544829 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1544829 ']' 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.068 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.068 [2024-07-26 11:27:43.539447] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:48.068 [2024-07-26 11:27:43.539489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.068 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.068 [2024-07-26 11:27:43.607396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.068 [2024-07-26 11:27:43.682338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.068 [2024-07-26 11:27:43.682374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.068 [2024-07-26 11:27:43.682381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.068 [2024-07-26 11:27:43.682387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.068 [2024-07-26 11:27:43.682392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.068 [2024-07-26 11:27:43.682408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.V8XslquD3v 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.V8XslquD3v 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.V8XslquD3v 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V8XslquD3v 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.004 [2024-07-26 11:27:44.548173] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.004 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.262 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.262 [2024-07-26 11:27:44.905080] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.262 [2024-07-26 11:27:44.905269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.521 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.521 malloc0 00:19:49.521 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.780 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V8XslquD3v 00:19:49.780 [2024-07-26 11:27:45.422395] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:49.780 [2024-07-26 11:27:45.422419] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:49.780 [2024-07-26 11:27:45.422439] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:49.780 request: 00:19:49.780 { 
00:19:49.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.781 "host": "nqn.2016-06.io.spdk:host1", 00:19:49.781 "psk": "/tmp/tmp.V8XslquD3v", 00:19:49.781 "method": "nvmf_subsystem_add_host", 00:19:49.781 "req_id": 1 00:19:49.781 } 00:19:49.781 Got JSON-RPC error response 00:19:49.781 response: 00:19:49.781 { 00:19:49.781 "code": -32603, 00:19:49.781 "message": "Internal error" 00:19:49.781 } 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1544829 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1544829 ']' 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1544829 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1544829 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1544829' 00:19:50.040 killing process with pid 1544829 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1544829 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1544829 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.V8XslquD3v 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1545120 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1545120 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1545120 ']' 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.040 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.299 [2024-07-26 11:27:45.739847] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:50.299 [2024-07-26 11:27:45.739892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.299 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.299 [2024-07-26 11:27:45.807405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.300 [2024-07-26 11:27:45.877883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.300 [2024-07-26 11:27:45.877923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.300 [2024-07-26 11:27:45.877930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.300 [2024-07-26 11:27:45.877936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.300 [2024-07-26 11:27:45.877940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:50.300 [2024-07-26 11:27:45.877957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.V8XslquD3v 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V8XslquD3v 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.236 [2024-07-26 11:27:46.732507] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.236 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.495 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:51.495 [2024-07-26 11:27:47.085405] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:51.495 [2024-07-26 11:27:47.085589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:51.495 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:51.753 malloc0 00:19:51.753 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.012 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V8XslquD3v 00:19:52.012 [2024-07-26 11:27:47.642825] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1545577 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1545577 /var/tmp/bdevperf.sock 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1545577 ']' 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:52.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.271 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.271 [2024-07-26 11:27:47.704734] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:52.271 [2024-07-26 11:27:47.704777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545577 ] 00:19:52.271 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.271 [2024-07-26 11:27:47.771153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.271 [2024-07-26 11:27:47.843584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.207 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.208 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.208 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V8XslquD3v 00:19:53.208 [2024-07-26 11:27:48.669675] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.208 [2024-07-26 11:27:48.669746] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:53.208 TLSTESTn1 00:19:53.208 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:53.467 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:53.467 "subsystems": [ 00:19:53.467 { 00:19:53.467 "subsystem": "keyring", 00:19:53.467 "config": [] 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "subsystem": "iobuf", 00:19:53.467 "config": [ 00:19:53.467 { 00:19:53.467 "method": "iobuf_set_options", 00:19:53.467 "params": { 00:19:53.467 "small_pool_count": 8192, 00:19:53.467 "large_pool_count": 1024, 00:19:53.467 "small_bufsize": 8192, 00:19:53.467 "large_bufsize": 135168 00:19:53.467 } 00:19:53.467 } 00:19:53.467 ] 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "subsystem": "sock", 00:19:53.467 "config": [ 00:19:53.467 { 00:19:53.467 "method": "sock_set_default_impl", 00:19:53.467 "params": { 00:19:53.467 "impl_name": "posix" 00:19:53.467 } 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "method": "sock_impl_set_options", 00:19:53.467 "params": { 00:19:53.467 "impl_name": "ssl", 00:19:53.467 "recv_buf_size": 4096, 00:19:53.467 "send_buf_size": 4096, 00:19:53.467 "enable_recv_pipe": true, 00:19:53.467 "enable_quickack": false, 00:19:53.467 "enable_placement_id": 0, 00:19:53.467 "enable_zerocopy_send_server": true, 00:19:53.467 "enable_zerocopy_send_client": false, 00:19:53.467 "zerocopy_threshold": 0, 00:19:53.467 "tls_version": 0, 00:19:53.467 "enable_ktls": false 00:19:53.467 } 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "method": "sock_impl_set_options", 00:19:53.467 "params": { 00:19:53.467 "impl_name": "posix", 00:19:53.467 "recv_buf_size": 2097152, 00:19:53.467 "send_buf_size": 2097152, 00:19:53.467 "enable_recv_pipe": true, 00:19:53.467 "enable_quickack": false, 00:19:53.467 "enable_placement_id": 0, 00:19:53.467 "enable_zerocopy_send_server": true, 00:19:53.467 "enable_zerocopy_send_client": false, 00:19:53.467 "zerocopy_threshold": 0, 00:19:53.467 "tls_version": 0, 00:19:53.467 "enable_ktls": false 00:19:53.467 } 
00:19:53.467 } 00:19:53.467 ] 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "subsystem": "vmd", 00:19:53.467 "config": [] 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "subsystem": "accel", 00:19:53.467 "config": [ 00:19:53.467 { 00:19:53.467 "method": "accel_set_options", 00:19:53.467 "params": { 00:19:53.467 "small_cache_size": 128, 00:19:53.467 "large_cache_size": 16, 00:19:53.467 "task_count": 2048, 00:19:53.467 "sequence_count": 2048, 00:19:53.467 "buf_count": 2048 00:19:53.467 } 00:19:53.467 } 00:19:53.467 ] 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "subsystem": "bdev", 00:19:53.467 "config": [ 00:19:53.467 { 00:19:53.467 "method": "bdev_set_options", 00:19:53.467 "params": { 00:19:53.467 "bdev_io_pool_size": 65535, 00:19:53.467 "bdev_io_cache_size": 256, 00:19:53.467 "bdev_auto_examine": true, 00:19:53.467 "iobuf_small_cache_size": 128, 00:19:53.467 "iobuf_large_cache_size": 16 00:19:53.467 } 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "method": "bdev_raid_set_options", 00:19:53.467 "params": { 00:19:53.467 "process_window_size_kb": 1024, 00:19:53.467 "process_max_bandwidth_mb_sec": 0 00:19:53.467 } 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "method": "bdev_iscsi_set_options", 00:19:53.467 "params": { 00:19:53.467 "timeout_sec": 30 00:19:53.467 } 00:19:53.467 }, 00:19:53.467 { 00:19:53.467 "method": "bdev_nvme_set_options", 00:19:53.467 "params": { 00:19:53.467 "action_on_timeout": "none", 00:19:53.467 "timeout_us": 0, 00:19:53.467 "timeout_admin_us": 0, 00:19:53.467 "keep_alive_timeout_ms": 10000, 00:19:53.467 "arbitration_burst": 0, 00:19:53.467 "low_priority_weight": 0, 00:19:53.467 "medium_priority_weight": 0, 00:19:53.467 "high_priority_weight": 0, 00:19:53.467 "nvme_adminq_poll_period_us": 10000, 00:19:53.467 "nvme_ioq_poll_period_us": 0, 00:19:53.467 "io_queue_requests": 0, 00:19:53.467 "delay_cmd_submit": true, 00:19:53.467 "transport_retry_count": 4, 00:19:53.467 "bdev_retry_count": 3, 00:19:53.467 "transport_ack_timeout": 0, 00:19:53.467 
"ctrlr_loss_timeout_sec": 0, 00:19:53.467 "reconnect_delay_sec": 0, 00:19:53.467 "fast_io_fail_timeout_sec": 0, 00:19:53.467 "disable_auto_failback": false, 00:19:53.467 "generate_uuids": false, 00:19:53.467 "transport_tos": 0, 00:19:53.467 "nvme_error_stat": false, 00:19:53.467 "rdma_srq_size": 0, 00:19:53.467 "io_path_stat": false, 00:19:53.467 "allow_accel_sequence": false, 00:19:53.467 "rdma_max_cq_size": 0, 00:19:53.467 "rdma_cm_event_timeout_ms": 0, 00:19:53.467 "dhchap_digests": [ 00:19:53.467 "sha256", 00:19:53.468 "sha384", 00:19:53.468 "sha512" 00:19:53.468 ], 00:19:53.468 "dhchap_dhgroups": [ 00:19:53.468 "null", 00:19:53.468 "ffdhe2048", 00:19:53.468 "ffdhe3072", 00:19:53.468 "ffdhe4096", 00:19:53.468 "ffdhe6144", 00:19:53.468 "ffdhe8192" 00:19:53.468 ] 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "bdev_nvme_set_hotplug", 00:19:53.468 "params": { 00:19:53.468 "period_us": 100000, 00:19:53.468 "enable": false 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "bdev_malloc_create", 00:19:53.468 "params": { 00:19:53.468 "name": "malloc0", 00:19:53.468 "num_blocks": 8192, 00:19:53.468 "block_size": 4096, 00:19:53.468 "physical_block_size": 4096, 00:19:53.468 "uuid": "7b7ebee6-adb3-4e70-b6ee-faa7e0b29844", 00:19:53.468 "optimal_io_boundary": 0, 00:19:53.468 "md_size": 0, 00:19:53.468 "dif_type": 0, 00:19:53.468 "dif_is_head_of_md": false, 00:19:53.468 "dif_pi_format": 0 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "bdev_wait_for_examine" 00:19:53.468 } 00:19:53.468 ] 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "subsystem": "nbd", 00:19:53.468 "config": [] 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "subsystem": "scheduler", 00:19:53.468 "config": [ 00:19:53.468 { 00:19:53.468 "method": "framework_set_scheduler", 00:19:53.468 "params": { 00:19:53.468 "name": "static" 00:19:53.468 } 00:19:53.468 } 00:19:53.468 ] 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "subsystem": "nvmf", 00:19:53.468 
"config": [ 00:19:53.468 { 00:19:53.468 "method": "nvmf_set_config", 00:19:53.468 "params": { 00:19:53.468 "discovery_filter": "match_any", 00:19:53.468 "admin_cmd_passthru": { 00:19:53.468 "identify_ctrlr": false 00:19:53.468 } 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "nvmf_set_max_subsystems", 00:19:53.468 "params": { 00:19:53.468 "max_subsystems": 1024 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "nvmf_set_crdt", 00:19:53.468 "params": { 00:19:53.468 "crdt1": 0, 00:19:53.468 "crdt2": 0, 00:19:53.468 "crdt3": 0 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "nvmf_create_transport", 00:19:53.468 "params": { 00:19:53.468 "trtype": "TCP", 00:19:53.468 "max_queue_depth": 128, 00:19:53.468 "max_io_qpairs_per_ctrlr": 127, 00:19:53.468 "in_capsule_data_size": 4096, 00:19:53.468 "max_io_size": 131072, 00:19:53.468 "io_unit_size": 131072, 00:19:53.468 "max_aq_depth": 128, 00:19:53.468 "num_shared_buffers": 511, 00:19:53.468 "buf_cache_size": 4294967295, 00:19:53.468 "dif_insert_or_strip": false, 00:19:53.468 "zcopy": false, 00:19:53.468 "c2h_success": false, 00:19:53.468 "sock_priority": 0, 00:19:53.468 "abort_timeout_sec": 1, 00:19:53.468 "ack_timeout": 0, 00:19:53.468 "data_wr_pool_size": 0 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "nvmf_create_subsystem", 00:19:53.468 "params": { 00:19:53.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.468 "allow_any_host": false, 00:19:53.468 "serial_number": "SPDK00000000000001", 00:19:53.468 "model_number": "SPDK bdev Controller", 00:19:53.468 "max_namespaces": 10, 00:19:53.468 "min_cntlid": 1, 00:19:53.468 "max_cntlid": 65519, 00:19:53.468 "ana_reporting": false 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "nvmf_subsystem_add_host", 00:19:53.468 "params": { 00:19:53.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.468 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.468 "psk": "/tmp/tmp.V8XslquD3v" 
00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "nvmf_subsystem_add_ns", 00:19:53.468 "params": { 00:19:53.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.468 "namespace": { 00:19:53.468 "nsid": 1, 00:19:53.468 "bdev_name": "malloc0", 00:19:53.468 "nguid": "7B7EBEE6ADB34E70B6EEFAA7E0B29844", 00:19:53.468 "uuid": "7b7ebee6-adb3-4e70-b6ee-faa7e0b29844", 00:19:53.468 "no_auto_visible": false 00:19:53.468 } 00:19:53.468 } 00:19:53.468 }, 00:19:53.468 { 00:19:53.468 "method": "nvmf_subsystem_add_listener", 00:19:53.468 "params": { 00:19:53.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.468 "listen_address": { 00:19:53.468 "trtype": "TCP", 00:19:53.468 "adrfam": "IPv4", 00:19:53.468 "traddr": "10.0.0.2", 00:19:53.468 "trsvcid": "4420" 00:19:53.468 }, 00:19:53.468 "secure_channel": true 00:19:53.468 } 00:19:53.468 } 00:19:53.468 ] 00:19:53.468 } 00:19:53.468 ] 00:19:53.468 }' 00:19:53.468 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:53.728 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:53.728 "subsystems": [ 00:19:53.728 { 00:19:53.728 "subsystem": "keyring", 00:19:53.728 "config": [] 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "subsystem": "iobuf", 00:19:53.728 "config": [ 00:19:53.728 { 00:19:53.728 "method": "iobuf_set_options", 00:19:53.728 "params": { 00:19:53.728 "small_pool_count": 8192, 00:19:53.728 "large_pool_count": 1024, 00:19:53.728 "small_bufsize": 8192, 00:19:53.728 "large_bufsize": 135168 00:19:53.728 } 00:19:53.728 } 00:19:53.728 ] 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "subsystem": "sock", 00:19:53.728 "config": [ 00:19:53.728 { 00:19:53.728 "method": "sock_set_default_impl", 00:19:53.728 "params": { 00:19:53.728 "impl_name": "posix" 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "sock_impl_set_options", 00:19:53.728 
"params": { 00:19:53.728 "impl_name": "ssl", 00:19:53.728 "recv_buf_size": 4096, 00:19:53.728 "send_buf_size": 4096, 00:19:53.728 "enable_recv_pipe": true, 00:19:53.728 "enable_quickack": false, 00:19:53.728 "enable_placement_id": 0, 00:19:53.728 "enable_zerocopy_send_server": true, 00:19:53.728 "enable_zerocopy_send_client": false, 00:19:53.728 "zerocopy_threshold": 0, 00:19:53.728 "tls_version": 0, 00:19:53.728 "enable_ktls": false 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "sock_impl_set_options", 00:19:53.728 "params": { 00:19:53.728 "impl_name": "posix", 00:19:53.728 "recv_buf_size": 2097152, 00:19:53.728 "send_buf_size": 2097152, 00:19:53.728 "enable_recv_pipe": true, 00:19:53.728 "enable_quickack": false, 00:19:53.728 "enable_placement_id": 0, 00:19:53.728 "enable_zerocopy_send_server": true, 00:19:53.728 "enable_zerocopy_send_client": false, 00:19:53.728 "zerocopy_threshold": 0, 00:19:53.728 "tls_version": 0, 00:19:53.728 "enable_ktls": false 00:19:53.728 } 00:19:53.728 } 00:19:53.728 ] 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "subsystem": "vmd", 00:19:53.728 "config": [] 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "subsystem": "accel", 00:19:53.728 "config": [ 00:19:53.728 { 00:19:53.728 "method": "accel_set_options", 00:19:53.728 "params": { 00:19:53.728 "small_cache_size": 128, 00:19:53.728 "large_cache_size": 16, 00:19:53.728 "task_count": 2048, 00:19:53.728 "sequence_count": 2048, 00:19:53.728 "buf_count": 2048 00:19:53.728 } 00:19:53.728 } 00:19:53.728 ] 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "subsystem": "bdev", 00:19:53.728 "config": [ 00:19:53.728 { 00:19:53.728 "method": "bdev_set_options", 00:19:53.728 "params": { 00:19:53.728 "bdev_io_pool_size": 65535, 00:19:53.728 "bdev_io_cache_size": 256, 00:19:53.728 "bdev_auto_examine": true, 00:19:53.728 "iobuf_small_cache_size": 128, 00:19:53.728 "iobuf_large_cache_size": 16 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "bdev_raid_set_options", 
00:19:53.728 "params": { 00:19:53.728 "process_window_size_kb": 1024, 00:19:53.728 "process_max_bandwidth_mb_sec": 0 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "bdev_iscsi_set_options", 00:19:53.728 "params": { 00:19:53.728 "timeout_sec": 30 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "bdev_nvme_set_options", 00:19:53.728 "params": { 00:19:53.728 "action_on_timeout": "none", 00:19:53.728 "timeout_us": 0, 00:19:53.728 "timeout_admin_us": 0, 00:19:53.728 "keep_alive_timeout_ms": 10000, 00:19:53.728 "arbitration_burst": 0, 00:19:53.728 "low_priority_weight": 0, 00:19:53.728 "medium_priority_weight": 0, 00:19:53.728 "high_priority_weight": 0, 00:19:53.728 "nvme_adminq_poll_period_us": 10000, 00:19:53.728 "nvme_ioq_poll_period_us": 0, 00:19:53.728 "io_queue_requests": 512, 00:19:53.728 "delay_cmd_submit": true, 00:19:53.728 "transport_retry_count": 4, 00:19:53.728 "bdev_retry_count": 3, 00:19:53.728 "transport_ack_timeout": 0, 00:19:53.728 "ctrlr_loss_timeout_sec": 0, 00:19:53.728 "reconnect_delay_sec": 0, 00:19:53.728 "fast_io_fail_timeout_sec": 0, 00:19:53.728 "disable_auto_failback": false, 00:19:53.728 "generate_uuids": false, 00:19:53.728 "transport_tos": 0, 00:19:53.728 "nvme_error_stat": false, 00:19:53.728 "rdma_srq_size": 0, 00:19:53.728 "io_path_stat": false, 00:19:53.728 "allow_accel_sequence": false, 00:19:53.728 "rdma_max_cq_size": 0, 00:19:53.728 "rdma_cm_event_timeout_ms": 0, 00:19:53.728 "dhchap_digests": [ 00:19:53.728 "sha256", 00:19:53.728 "sha384", 00:19:53.728 "sha512" 00:19:53.728 ], 00:19:53.728 "dhchap_dhgroups": [ 00:19:53.728 "null", 00:19:53.728 "ffdhe2048", 00:19:53.728 "ffdhe3072", 00:19:53.728 "ffdhe4096", 00:19:53.728 "ffdhe6144", 00:19:53.728 "ffdhe8192" 00:19:53.728 ] 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "bdev_nvme_attach_controller", 00:19:53.728 "params": { 00:19:53.728 "name": "TLSTEST", 00:19:53.728 "trtype": "TCP", 00:19:53.728 "adrfam": "IPv4", 
00:19:53.728 "traddr": "10.0.0.2", 00:19:53.728 "trsvcid": "4420", 00:19:53.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.728 "prchk_reftag": false, 00:19:53.728 "prchk_guard": false, 00:19:53.728 "ctrlr_loss_timeout_sec": 0, 00:19:53.728 "reconnect_delay_sec": 0, 00:19:53.728 "fast_io_fail_timeout_sec": 0, 00:19:53.728 "psk": "/tmp/tmp.V8XslquD3v", 00:19:53.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.728 "hdgst": false, 00:19:53.728 "ddgst": false 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "bdev_nvme_set_hotplug", 00:19:53.728 "params": { 00:19:53.728 "period_us": 100000, 00:19:53.728 "enable": false 00:19:53.728 } 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "method": "bdev_wait_for_examine" 00:19:53.728 } 00:19:53.728 ] 00:19:53.728 }, 00:19:53.728 { 00:19:53.728 "subsystem": "nbd", 00:19:53.728 "config": [] 00:19:53.728 } 00:19:53.728 ] 00:19:53.728 }' 00:19:53.728 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1545577 00:19:53.728 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1545577 ']' 00:19:53.728 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1545577 00:19:53.728 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:53.728 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.729 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545577 00:19:53.729 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:53.729 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:53.729 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545577' 00:19:53.729 killing process with 
pid 1545577 00:19:53.729 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1545577 00:19:53.729 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.729 00:19:53.729 Latency(us) 00:19:53.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.729 =================================================================================================================== 00:19:53.729 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.729 [2024-07-26 11:27:49.328862] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:53.729 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1545577 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1545120 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1545120 ']' 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1545120 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545120 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545120' 00:19:53.988 killing process with pid 1545120 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1545120 00:19:53.988 [2024-07-26 11:27:49.556654] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:53.988 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1545120 00:19:54.248 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:54.248 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.248 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:54.248 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:54.248 "subsystems": [ 00:19:54.248 { 00:19:54.248 "subsystem": "keyring", 00:19:54.248 "config": [] 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "subsystem": "iobuf", 00:19:54.248 "config": [ 00:19:54.248 { 00:19:54.248 "method": "iobuf_set_options", 00:19:54.248 "params": { 00:19:54.248 "small_pool_count": 8192, 00:19:54.248 "large_pool_count": 1024, 00:19:54.248 "small_bufsize": 8192, 00:19:54.248 "large_bufsize": 135168 00:19:54.248 } 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "subsystem": "sock", 00:19:54.248 "config": [ 00:19:54.248 { 00:19:54.248 "method": "sock_set_default_impl", 00:19:54.248 "params": { 00:19:54.248 "impl_name": "posix" 00:19:54.248 } 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "method": "sock_impl_set_options", 00:19:54.248 "params": { 00:19:54.248 "impl_name": "ssl", 00:19:54.248 "recv_buf_size": 4096, 00:19:54.248 "send_buf_size": 4096, 00:19:54.248 "enable_recv_pipe": true, 00:19:54.248 "enable_quickack": false, 00:19:54.248 "enable_placement_id": 0, 00:19:54.248 "enable_zerocopy_send_server": true, 00:19:54.248 "enable_zerocopy_send_client": false, 00:19:54.248 "zerocopy_threshold": 0, 00:19:54.248 "tls_version": 0, 00:19:54.248 "enable_ktls": false 
00:19:54.248 } 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "method": "sock_impl_set_options", 00:19:54.248 "params": { 00:19:54.248 "impl_name": "posix", 00:19:54.248 "recv_buf_size": 2097152, 00:19:54.248 "send_buf_size": 2097152, 00:19:54.248 "enable_recv_pipe": true, 00:19:54.248 "enable_quickack": false, 00:19:54.248 "enable_placement_id": 0, 00:19:54.248 "enable_zerocopy_send_server": true, 00:19:54.248 "enable_zerocopy_send_client": false, 00:19:54.248 "zerocopy_threshold": 0, 00:19:54.248 "tls_version": 0, 00:19:54.248 "enable_ktls": false 00:19:54.248 } 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "subsystem": "vmd", 00:19:54.248 "config": [] 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "subsystem": "accel", 00:19:54.248 "config": [ 00:19:54.248 { 00:19:54.248 "method": "accel_set_options", 00:19:54.248 "params": { 00:19:54.248 "small_cache_size": 128, 00:19:54.248 "large_cache_size": 16, 00:19:54.248 "task_count": 2048, 00:19:54.248 "sequence_count": 2048, 00:19:54.248 "buf_count": 2048 00:19:54.248 } 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "subsystem": "bdev", 00:19:54.248 "config": [ 00:19:54.248 { 00:19:54.248 "method": "bdev_set_options", 00:19:54.248 "params": { 00:19:54.248 "bdev_io_pool_size": 65535, 00:19:54.248 "bdev_io_cache_size": 256, 00:19:54.248 "bdev_auto_examine": true, 00:19:54.248 "iobuf_small_cache_size": 128, 00:19:54.248 "iobuf_large_cache_size": 16 00:19:54.248 } 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "method": "bdev_raid_set_options", 00:19:54.248 "params": { 00:19:54.248 "process_window_size_kb": 1024, 00:19:54.248 "process_max_bandwidth_mb_sec": 0 00:19:54.248 } 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "method": "bdev_iscsi_set_options", 00:19:54.248 "params": { 00:19:54.248 "timeout_sec": 30 00:19:54.248 } 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "method": "bdev_nvme_set_options", 00:19:54.248 "params": { 00:19:54.248 "action_on_timeout": "none", 00:19:54.248 
"timeout_us": 0, 00:19:54.248 "timeout_admin_us": 0, 00:19:54.248 "keep_alive_timeout_ms": 10000, 00:19:54.248 "arbitration_burst": 0, 00:19:54.248 "low_priority_weight": 0, 00:19:54.248 "medium_priority_weight": 0, 00:19:54.248 "high_priority_weight": 0, 00:19:54.248 "nvme_adminq_poll_period_us": 10000, 00:19:54.248 "nvme_ioq_poll_period_us": 0, 00:19:54.248 "io_queue_requests": 0, 00:19:54.248 "delay_cmd_submit": true, 00:19:54.248 "transport_retry_count": 4, 00:19:54.249 "bdev_retry_count": 3, 00:19:54.249 "transport_ack_timeout": 0, 00:19:54.249 "ctrlr_loss_timeout_sec": 0, 00:19:54.249 "reconnect_delay_sec": 0, 00:19:54.249 "fast_io_fail_timeout_sec": 0, 00:19:54.249 "disable_auto_failback": false, 00:19:54.249 "generate_uuids": false, 00:19:54.249 "transport_tos": 0, 00:19:54.249 "nvme_error_stat": false, 00:19:54.249 "rdma_srq_size": 0, 00:19:54.249 "io_path_stat": false, 00:19:54.249 "allow_accel_sequence": false, 00:19:54.249 "rdma_max_cq_size": 0, 00:19:54.249 "rdma_cm_event_timeout_ms": 0, 00:19:54.249 "dhchap_digests": [ 00:19:54.249 "sha256", 00:19:54.249 "sha384", 00:19:54.249 "sha512" 00:19:54.249 ], 00:19:54.249 "dhchap_dhgroups": [ 00:19:54.249 "null", 00:19:54.249 "ffdhe2048", 00:19:54.249 "ffdhe3072", 00:19:54.249 "ffdhe4096", 00:19:54.249 "ffdhe6144", 00:19:54.249 "ffdhe8192" 00:19:54.249 ] 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "bdev_nvme_set_hotplug", 00:19:54.249 "params": { 00:19:54.249 "period_us": 100000, 00:19:54.249 "enable": false 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "bdev_malloc_create", 00:19:54.249 "params": { 00:19:54.249 "name": "malloc0", 00:19:54.249 "num_blocks": 8192, 00:19:54.249 "block_size": 4096, 00:19:54.249 "physical_block_size": 4096, 00:19:54.249 "uuid": "7b7ebee6-adb3-4e70-b6ee-faa7e0b29844", 00:19:54.249 "optimal_io_boundary": 0, 00:19:54.249 "md_size": 0, 00:19:54.249 "dif_type": 0, 00:19:54.249 "dif_is_head_of_md": false, 00:19:54.249 "dif_pi_format": 0 
00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "bdev_wait_for_examine" 00:19:54.249 } 00:19:54.249 ] 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "subsystem": "nbd", 00:19:54.249 "config": [] 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "subsystem": "scheduler", 00:19:54.249 "config": [ 00:19:54.249 { 00:19:54.249 "method": "framework_set_scheduler", 00:19:54.249 "params": { 00:19:54.249 "name": "static" 00:19:54.249 } 00:19:54.249 } 00:19:54.249 ] 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "subsystem": "nvmf", 00:19:54.249 "config": [ 00:19:54.249 { 00:19:54.249 "method": "nvmf_set_config", 00:19:54.249 "params": { 00:19:54.249 "discovery_filter": "match_any", 00:19:54.249 "admin_cmd_passthru": { 00:19:54.249 "identify_ctrlr": false 00:19:54.249 } 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "nvmf_set_max_subsystems", 00:19:54.249 "params": { 00:19:54.249 "max_subsystems": 1024 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "nvmf_set_crdt", 00:19:54.249 "params": { 00:19:54.249 "crdt1": 0, 00:19:54.249 "crdt2": 0, 00:19:54.249 "crdt3": 0 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "nvmf_create_transport", 00:19:54.249 "params": { 00:19:54.249 "trtype": "TCP", 00:19:54.249 "max_queue_depth": 128, 00:19:54.249 "max_io_qpairs_per_ctrlr": 127, 00:19:54.249 "in_capsule_data_size": 4096, 00:19:54.249 "max_io_size": 131072, 00:19:54.249 "io_unit_size": 131072, 00:19:54.249 "max_aq_depth": 128, 00:19:54.249 "num_shared_buffers": 511, 00:19:54.249 "buf_cache_size": 4294967295, 00:19:54.249 "dif_insert_or_strip": false, 00:19:54.249 "zcopy": false, 00:19:54.249 "c2h_success": false, 00:19:54.249 "sock_priority": 0, 00:19:54.249 "abort_timeout_sec": 1, 00:19:54.249 "ack_timeout": 0, 00:19:54.249 "data_wr_pool_size": 0 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "nvmf_create_subsystem", 00:19:54.249 "params": { 00:19:54.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:19:54.249 "allow_any_host": false, 00:19:54.249 "serial_number": "SPDK00000000000001", 00:19:54.249 "model_number": "SPDK bdev Controller", 00:19:54.249 "max_namespaces": 10, 00:19:54.249 "min_cntlid": 1, 00:19:54.249 "max_cntlid": 65519, 00:19:54.249 "ana_reporting": false 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "nvmf_subsystem_add_host", 00:19:54.249 "params": { 00:19:54.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.249 "host": "nqn.2016-06.io.spdk:host1", 00:19:54.249 "psk": "/tmp/tmp.V8XslquD3v" 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "nvmf_subsystem_add_ns", 00:19:54.249 "params": { 00:19:54.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.249 "namespace": { 00:19:54.249 "nsid": 1, 00:19:54.249 "bdev_name": "malloc0", 00:19:54.249 "nguid": "7B7EBEE6ADB34E70B6EEFAA7E0B29844", 00:19:54.249 "uuid": "7b7ebee6-adb3-4e70-b6ee-faa7e0b29844", 00:19:54.249 "no_auto_visible": false 00:19:54.249 } 00:19:54.249 } 00:19:54.249 }, 00:19:54.249 { 00:19:54.249 "method": "nvmf_subsystem_add_listener", 00:19:54.249 "params": { 00:19:54.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.249 "listen_address": { 00:19:54.249 "trtype": "TCP", 00:19:54.249 "adrfam": "IPv4", 00:19:54.249 "traddr": "10.0.0.2", 00:19:54.249 "trsvcid": "4420" 00:19:54.249 }, 00:19:54.249 "secure_channel": true 00:19:54.249 } 00:19:54.249 } 00:19:54.249 ] 00:19:54.249 } 00:19:54.249 ] 00:19:54.249 }' 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1545838 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1545838 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:54.249 
11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1545838 ']' 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:54.249 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.249 [2024-07-26 11:27:49.804464] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:54.249 [2024-07-26 11:27:49.804509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.249 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.250 [2024-07-26 11:27:49.873708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.509 [2024-07-26 11:27:49.951054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.509 [2024-07-26 11:27:49.951091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.509 [2024-07-26 11:27:49.951098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.509 [2024-07-26 11:27:49.951103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
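The target above is started with `-c /dev/fd/62`, meaning the JSON config echoed earlier in this log is piped in as its startup file. As a rough illustration only (not the harness's actual code), the overall shape of such an SPDK startup config, with a representative subset of the subsystems and methods from the dump above, can be sketched and sanity-checked in Python:

```python
import json

# Skeleton of an SPDK startup config like the one echoed into /dev/fd/62
# above; only a representative subset of subsystems/methods is shown.
config = {
    "subsystems": [
        {"subsystem": "keyring", "config": []},
        {
            "subsystem": "sock",
            "config": [
                {"method": "sock_set_default_impl",
                 "params": {"impl_name": "posix"}},
                {"method": "sock_impl_set_options",
                 "params": {"impl_name": "ssl",
                            "tls_version": 0,
                            "enable_ktls": False}},
            ],
        },
        {
            "subsystem": "nvmf",
            "config": [
                {"method": "nvmf_create_transport",
                 "params": {"trtype": "TCP", "max_queue_depth": 128}},
                {"method": "nvmf_subsystem_add_listener",
                 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                            "listen_address": {"trtype": "TCP",
                                               "adrfam": "IPv4",
                                               "traddr": "10.0.0.2",
                                               "trsvcid": "4420"},
                            "secure_channel": True}},
            ],
        },
    ]
}

# Confirm the document round-trips as JSON and list every RPC method
# the config would invoke, in subsystem order.
assert json.loads(json.dumps(config)) == config
methods = [entry["method"]
           for sub in config["subsystems"]
           for entry in sub["config"]]
print(methods)
```

Each `config` entry corresponds to one JSON-RPC call made at startup, which is why the dump in the log reads as an ordered list of `method`/`params` pairs per subsystem.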
00:19:54.509 [2024-07-26 11:27:49.951108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.509 [2024-07-26 11:27:49.951159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.509 [2024-07-26 11:27:50.153070] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.767 [2024-07-26 11:27:50.178277] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:54.767 [2024-07-26 11:27:50.194252] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.767 [2024-07-26 11:27:50.194438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1546076 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1546076 /var/tmp/bdevperf.sock 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1546076 ']' 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.028 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:55.028 "subsystems": [ 00:19:55.028 { 00:19:55.028 "subsystem": "keyring", 00:19:55.028 "config": [] 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "subsystem": "iobuf", 00:19:55.028 "config": [ 00:19:55.028 { 00:19:55.028 "method": "iobuf_set_options", 00:19:55.028 "params": { 00:19:55.028 "small_pool_count": 8192, 00:19:55.028 "large_pool_count": 1024, 00:19:55.028 "small_bufsize": 8192, 00:19:55.028 "large_bufsize": 135168 00:19:55.028 } 00:19:55.028 } 00:19:55.028 ] 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "subsystem": "sock", 00:19:55.028 "config": [ 00:19:55.028 { 00:19:55.028 "method": "sock_set_default_impl", 00:19:55.028 "params": { 00:19:55.028 "impl_name": "posix" 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "sock_impl_set_options", 00:19:55.028 "params": { 00:19:55.028 "impl_name": "ssl", 00:19:55.028 "recv_buf_size": 4096, 00:19:55.028 "send_buf_size": 4096, 00:19:55.028 "enable_recv_pipe": true, 00:19:55.028 "enable_quickack": false, 00:19:55.028 "enable_placement_id": 0, 00:19:55.028 "enable_zerocopy_send_server": true, 00:19:55.028 "enable_zerocopy_send_client": false, 00:19:55.028 "zerocopy_threshold": 0, 00:19:55.028 "tls_version": 0, 00:19:55.028 "enable_ktls": false 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "sock_impl_set_options", 00:19:55.028 "params": { 00:19:55.028 "impl_name": "posix", 00:19:55.028 "recv_buf_size": 2097152, 00:19:55.028 "send_buf_size": 2097152, 00:19:55.028 "enable_recv_pipe": true, 00:19:55.028 "enable_quickack": false, 00:19:55.028 "enable_placement_id": 0, 
00:19:55.028 "enable_zerocopy_send_server": true, 00:19:55.028 "enable_zerocopy_send_client": false, 00:19:55.028 "zerocopy_threshold": 0, 00:19:55.028 "tls_version": 0, 00:19:55.028 "enable_ktls": false 00:19:55.028 } 00:19:55.028 } 00:19:55.028 ] 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "subsystem": "vmd", 00:19:55.028 "config": [] 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "subsystem": "accel", 00:19:55.028 "config": [ 00:19:55.028 { 00:19:55.028 "method": "accel_set_options", 00:19:55.028 "params": { 00:19:55.028 "small_cache_size": 128, 00:19:55.028 "large_cache_size": 16, 00:19:55.028 "task_count": 2048, 00:19:55.028 "sequence_count": 2048, 00:19:55.028 "buf_count": 2048 00:19:55.028 } 00:19:55.028 } 00:19:55.028 ] 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "subsystem": "bdev", 00:19:55.028 "config": [ 00:19:55.028 { 00:19:55.028 "method": "bdev_set_options", 00:19:55.028 "params": { 00:19:55.028 "bdev_io_pool_size": 65535, 00:19:55.028 "bdev_io_cache_size": 256, 00:19:55.028 "bdev_auto_examine": true, 00:19:55.028 "iobuf_small_cache_size": 128, 00:19:55.028 "iobuf_large_cache_size": 16 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "bdev_raid_set_options", 00:19:55.028 "params": { 00:19:55.028 "process_window_size_kb": 1024, 00:19:55.028 "process_max_bandwidth_mb_sec": 0 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "bdev_iscsi_set_options", 00:19:55.028 "params": { 00:19:55.028 "timeout_sec": 30 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "bdev_nvme_set_options", 00:19:55.028 "params": { 00:19:55.028 "action_on_timeout": "none", 00:19:55.028 "timeout_us": 0, 00:19:55.028 "timeout_admin_us": 0, 00:19:55.028 "keep_alive_timeout_ms": 10000, 00:19:55.028 "arbitration_burst": 0, 00:19:55.028 "low_priority_weight": 0, 00:19:55.028 "medium_priority_weight": 0, 00:19:55.028 "high_priority_weight": 0, 00:19:55.028 "nvme_adminq_poll_period_us": 10000, 00:19:55.028 "nvme_ioq_poll_period_us": 0, 
00:19:55.028 "io_queue_requests": 512, 00:19:55.028 "delay_cmd_submit": true, 00:19:55.028 "transport_retry_count": 4, 00:19:55.028 "bdev_retry_count": 3, 00:19:55.028 "transport_ack_timeout": 0, 00:19:55.028 "ctrlr_loss_timeout_sec": 0, 00:19:55.028 "reconnect_delay_sec": 0, 00:19:55.028 "fast_io_fail_timeout_sec": 0, 00:19:55.028 "disable_auto_failback": false, 00:19:55.028 "generate_uuids": false, 00:19:55.028 "transport_tos": 0, 00:19:55.028 "nvme_error_stat": false, 00:19:55.028 "rdma_srq_size": 0, 00:19:55.028 "io_path_stat": false, 00:19:55.028 "allow_accel_sequence": false, 00:19:55.028 "rdma_max_cq_size": 0, 00:19:55.028 "rdma_cm_event_timeout_ms": 0, 00:19:55.028 "dhchap_digests": [ 00:19:55.028 "sha256", 00:19:55.028 "sha384", 00:19:55.028 "sha512" 00:19:55.028 ], 00:19:55.028 "dhchap_dhgroups": [ 00:19:55.028 "null", 00:19:55.028 "ffdhe2048", 00:19:55.028 "ffdhe3072", 00:19:55.028 "ffdhe4096", 00:19:55.028 "ffdhe6144", 00:19:55.028 "ffdhe8192" 00:19:55.028 ] 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "bdev_nvme_attach_controller", 00:19:55.028 "params": { 00:19:55.028 "name": "TLSTEST", 00:19:55.028 "trtype": "TCP", 00:19:55.028 "adrfam": "IPv4", 00:19:55.028 "traddr": "10.0.0.2", 00:19:55.028 "trsvcid": "4420", 00:19:55.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.028 "prchk_reftag": false, 00:19:55.028 "prchk_guard": false, 00:19:55.028 "ctrlr_loss_timeout_sec": 0, 00:19:55.028 "reconnect_delay_sec": 0, 00:19:55.028 "fast_io_fail_timeout_sec": 0, 00:19:55.028 "psk": "/tmp/tmp.V8XslquD3v", 00:19:55.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.028 "hdgst": false, 00:19:55.028 "ddgst": false 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "bdev_nvme_set_hotplug", 00:19:55.029 "params": { 00:19:55.029 "period_us": 100000, 00:19:55.029 "enable": false 00:19:55.029 } 00:19:55.029 }, 00:19:55.029 { 00:19:55.029 "method": "bdev_wait_for_examine" 00:19:55.029 } 00:19:55.029 ] 00:19:55.029 }, 
00:19:55.029 { 00:19:55.029 "subsystem": "nbd", 00:19:55.029 "config": [] 00:19:55.029 } 00:19:55.029 ] 00:19:55.029 }' 00:19:55.029 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.029 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.029 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.029 [2024-07-26 11:27:50.674699] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:19:55.029 [2024-07-26 11:27:50.674742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546076 ] 00:19:55.288 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.288 [2024-07-26 11:27:50.742168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.288 [2024-07-26 11:27:50.819444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.547 [2024-07-26 11:27:50.960250] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.547 [2024-07-26 11:27:50.960318] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.146 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.146 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.146 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.146 Running I/O for 10 seconds... 00:20:06.156 00:20:06.156 Latency(us) 00:20:06.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.156 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:06.156 Verification LBA range: start 0x0 length 0x2000 00:20:06.156 TLSTESTn1 : 10.02 5553.55 21.69 0.00 0.00 23012.05 6272.73 28960.67 00:20:06.156 =================================================================================================================== 00:20:06.156 Total : 5553.55 21.69 0.00 0.00 23012.05 6272.73 28960.67 00:20:06.156 0 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1546076 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1546076 ']' 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1546076 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1546076 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:06.156 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1546076' 00:20:06.157 killing process with pid 1546076 00:20:06.157 11:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1546076 00:20:06.157 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.157 00:20:06.157 Latency(us) 00:20:06.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.157 =================================================================================================================== 00:20:06.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.157 [2024-07-26 11:28:01.657737] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:06.157 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1546076 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1545838 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1545838 ']' 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1545838 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545838 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545838' 00:20:06.416 killing process with pid 1545838 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1545838 00:20:06.416 
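The bdevperf summary earlier in this run reports 5553.55 IOPS and 21.69 MiB/s at a 4096-byte I/O size. As a quick cross-check, assuming the MiB/s column is simply IOPS multiplied by the I/O size (an assumption about bdevperf's reporting, not taken from this log):

```python
# Cross-check bdevperf's MiB/s column from its IOPS column, assuming
# MiB/s = IOPS * io_size_bytes / 2**20.
iops = 5553.55           # TLSTESTn1 row in the run summary above
io_size = 4096           # -o 4096 on the bdevperf command line
mib_per_s = iops * io_size / (1 << 20)
print(round(mib_per_s, 2))   # 21.69, matching the reported MiB/s
```

The agreement confirms the two columns are consistent for the verify workload driven over the TLS-secured TCP connection.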
[2024-07-26 11:28:01.881482] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:06.416 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1545838 00:20:06.416 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:06.416 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.416 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.416 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1547925 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1547925 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1547925 ']' 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.675 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.675 [2024-07-26 11:28:02.125985] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:20:06.675 [2024-07-26 11:28:02.126029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.675 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.675 [2024-07-26 11:28:02.193683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.675 [2024-07-26 11:28:02.269745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.675 [2024-07-26 11:28:02.269779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.675 [2024-07-26 11:28:02.269785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.675 [2024-07-26 11:28:02.269791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.675 [2024-07-26 11:28:02.269796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.675 [2024-07-26 11:28:02.269812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.V8XslquD3v 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V8XslquD3v 00:20:07.610 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.610 [2024-07-26 11:28:03.107775] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.610 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.880 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:07.880 [2024-07-26 11:28:03.444654] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.880 [2024-07-26 11:28:03.444848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:07.880 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:08.142 malloc0 00:20:08.142 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.142 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V8XslquD3v 00:20:08.401 [2024-07-26 11:28:03.933938] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1548182 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1548182 /var/tmp/bdevperf.sock 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1548182 ']' 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:08.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.401 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.401 [2024-07-26 11:28:03.992111] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:20:08.401 [2024-07-26 11:28:03.992156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548182 ] 00:20:08.401 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.401 [2024-07-26 11:28:04.055659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.659 [2024-07-26 11:28:04.133897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.226 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.226 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:09.227 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V8XslquD3v 00:20:09.485 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:09.485 [2024-07-26 11:28:05.120459] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.743 nvme0n1 00:20:09.743 11:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.743 Running I/O for 1 seconds... 00:20:10.693 00:20:10.693 Latency(us) 00:20:10.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.693 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:10.693 Verification LBA range: start 0x0 length 0x2000 00:20:10.693 nvme0n1 : 1.01 5273.57 20.60 0.00 0.00 24092.32 4681.14 32455.92 00:20:10.693 =================================================================================================================== 00:20:10.693 Total : 5273.57 20.60 0.00 0.00 24092.32 4681.14 32455.92 00:20:10.693 0 00:20:10.693 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1548182 00:20:10.693 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1548182 ']' 00:20:10.693 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1548182 00:20:10.693 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:10.952 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.952 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548182 00:20:10.952 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:10.952 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:10.952 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548182' 00:20:10.952 killing process with pid 1548182 00:20:10.952 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
1548182 00:20:10.952 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.952 00:20:10.952 Latency(us) 00:20:10.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.953 =================================================================================================================== 00:20:10.953 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.953 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1548182 00:20:10.953 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1547925 00:20:10.953 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1547925 ']' 00:20:10.953 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1547925 00:20:10.953 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:10.953 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.953 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1547925 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1547925' 00:20:11.212 killing process with pid 1547925 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1547925 00:20:11.212 [2024-07-26 11:28:06.618811] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1547925 
00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1548654 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1548654 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1548654 ']' 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.212 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.212 [2024-07-26 11:28:06.864114] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:20:11.212 [2024-07-26 11:28:06.864160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.471 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.471 [2024-07-26 11:28:06.933260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.471 [2024-07-26 11:28:07.000590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.471 [2024-07-26 11:28:07.000636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.471 [2024-07-26 11:28:07.000643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.471 [2024-07-26 11:28:07.000649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.471 [2024-07-26 11:28:07.000654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.471 [2024-07-26 11:28:07.000688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.038 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:12.038 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:12.038 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.038 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.038 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.297 [2024-07-26 11:28:07.708606] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.297 malloc0 00:20:12.297 [2024-07-26 11:28:07.736721] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.297 [2024-07-26 11:28:07.744937] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1548899 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 1548899 /var/tmp/bdevperf.sock 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1548899 ']' 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.297 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.297 [2024-07-26 11:28:07.815774] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:20:12.297 [2024-07-26 11:28:07.815812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548899 ] 00:20:12.297 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.297 [2024-07-26 11:28:07.878953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.556 [2024-07-26 11:28:07.958879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.123 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.124 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:13.124 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V8XslquD3v 00:20:13.124 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:13.383 [2024-07-26 11:28:08.934177] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.383 nvme0n1 00:20:13.383 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.642 Running I/O for 1 seconds... 
00:20:14.578 00:20:14.578 Latency(us) 00:20:14.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.578 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:14.578 Verification LBA range: start 0x0 length 0x2000 00:20:14.578 nvme0n1 : 1.01 5493.44 21.46 0.00 0.00 23117.00 6116.69 23218.47 00:20:14.578 =================================================================================================================== 00:20:14.578 Total : 5493.44 21.46 0.00 0.00 23117.00 6116.69 23218.47 00:20:14.578 0 00:20:14.578 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:14.578 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.578 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.578 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.838 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:14.838 "subsystems": [ 00:20:14.838 { 00:20:14.838 "subsystem": "keyring", 00:20:14.838 "config": [ 00:20:14.838 { 00:20:14.838 "method": "keyring_file_add_key", 00:20:14.838 "params": { 00:20:14.838 "name": "key0", 00:20:14.838 "path": "/tmp/tmp.V8XslquD3v" 00:20:14.838 } 00:20:14.838 } 00:20:14.838 ] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "iobuf", 00:20:14.838 "config": [ 00:20:14.838 { 00:20:14.838 "method": "iobuf_set_options", 00:20:14.838 "params": { 00:20:14.838 "small_pool_count": 8192, 00:20:14.838 "large_pool_count": 1024, 00:20:14.838 "small_bufsize": 8192, 00:20:14.838 "large_bufsize": 135168 00:20:14.838 } 00:20:14.838 } 00:20:14.838 ] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "sock", 00:20:14.838 "config": [ 00:20:14.838 { 00:20:14.838 "method": "sock_set_default_impl", 00:20:14.838 "params": { 00:20:14.838 "impl_name": "posix" 00:20:14.838 } 
00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "sock_impl_set_options", 00:20:14.838 "params": { 00:20:14.838 "impl_name": "ssl", 00:20:14.838 "recv_buf_size": 4096, 00:20:14.838 "send_buf_size": 4096, 00:20:14.838 "enable_recv_pipe": true, 00:20:14.838 "enable_quickack": false, 00:20:14.838 "enable_placement_id": 0, 00:20:14.838 "enable_zerocopy_send_server": true, 00:20:14.838 "enable_zerocopy_send_client": false, 00:20:14.838 "zerocopy_threshold": 0, 00:20:14.838 "tls_version": 0, 00:20:14.838 "enable_ktls": false 00:20:14.838 } 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "sock_impl_set_options", 00:20:14.838 "params": { 00:20:14.838 "impl_name": "posix", 00:20:14.838 "recv_buf_size": 2097152, 00:20:14.838 "send_buf_size": 2097152, 00:20:14.838 "enable_recv_pipe": true, 00:20:14.838 "enable_quickack": false, 00:20:14.838 "enable_placement_id": 0, 00:20:14.838 "enable_zerocopy_send_server": true, 00:20:14.838 "enable_zerocopy_send_client": false, 00:20:14.838 "zerocopy_threshold": 0, 00:20:14.838 "tls_version": 0, 00:20:14.838 "enable_ktls": false 00:20:14.838 } 00:20:14.838 } 00:20:14.838 ] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "vmd", 00:20:14.838 "config": [] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "accel", 00:20:14.838 "config": [ 00:20:14.838 { 00:20:14.838 "method": "accel_set_options", 00:20:14.838 "params": { 00:20:14.838 "small_cache_size": 128, 00:20:14.838 "large_cache_size": 16, 00:20:14.838 "task_count": 2048, 00:20:14.838 "sequence_count": 2048, 00:20:14.838 "buf_count": 2048 00:20:14.838 } 00:20:14.838 } 00:20:14.838 ] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "bdev", 00:20:14.838 "config": [ 00:20:14.838 { 00:20:14.838 "method": "bdev_set_options", 00:20:14.838 "params": { 00:20:14.838 "bdev_io_pool_size": 65535, 00:20:14.838 "bdev_io_cache_size": 256, 00:20:14.838 "bdev_auto_examine": true, 00:20:14.838 "iobuf_small_cache_size": 128, 00:20:14.838 "iobuf_large_cache_size": 16 
00:20:14.838 } 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "bdev_raid_set_options", 00:20:14.838 "params": { 00:20:14.838 "process_window_size_kb": 1024, 00:20:14.838 "process_max_bandwidth_mb_sec": 0 00:20:14.838 } 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "bdev_iscsi_set_options", 00:20:14.838 "params": { 00:20:14.838 "timeout_sec": 30 00:20:14.838 } 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "bdev_nvme_set_options", 00:20:14.838 "params": { 00:20:14.838 "action_on_timeout": "none", 00:20:14.838 "timeout_us": 0, 00:20:14.838 "timeout_admin_us": 0, 00:20:14.838 "keep_alive_timeout_ms": 10000, 00:20:14.838 "arbitration_burst": 0, 00:20:14.838 "low_priority_weight": 0, 00:20:14.838 "medium_priority_weight": 0, 00:20:14.838 "high_priority_weight": 0, 00:20:14.838 "nvme_adminq_poll_period_us": 10000, 00:20:14.838 "nvme_ioq_poll_period_us": 0, 00:20:14.838 "io_queue_requests": 0, 00:20:14.838 "delay_cmd_submit": true, 00:20:14.838 "transport_retry_count": 4, 00:20:14.838 "bdev_retry_count": 3, 00:20:14.838 "transport_ack_timeout": 0, 00:20:14.838 "ctrlr_loss_timeout_sec": 0, 00:20:14.838 "reconnect_delay_sec": 0, 00:20:14.838 "fast_io_fail_timeout_sec": 0, 00:20:14.838 "disable_auto_failback": false, 00:20:14.838 "generate_uuids": false, 00:20:14.838 "transport_tos": 0, 00:20:14.838 "nvme_error_stat": false, 00:20:14.838 "rdma_srq_size": 0, 00:20:14.838 "io_path_stat": false, 00:20:14.838 "allow_accel_sequence": false, 00:20:14.838 "rdma_max_cq_size": 0, 00:20:14.838 "rdma_cm_event_timeout_ms": 0, 00:20:14.838 "dhchap_digests": [ 00:20:14.838 "sha256", 00:20:14.838 "sha384", 00:20:14.838 "sha512" 00:20:14.838 ], 00:20:14.838 "dhchap_dhgroups": [ 00:20:14.838 "null", 00:20:14.838 "ffdhe2048", 00:20:14.838 "ffdhe3072", 00:20:14.838 "ffdhe4096", 00:20:14.838 "ffdhe6144", 00:20:14.838 "ffdhe8192" 00:20:14.838 ] 00:20:14.838 } 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "bdev_nvme_set_hotplug", 00:20:14.838 "params": { 00:20:14.838 
"period_us": 100000, 00:20:14.838 "enable": false 00:20:14.838 } 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "bdev_malloc_create", 00:20:14.838 "params": { 00:20:14.838 "name": "malloc0", 00:20:14.838 "num_blocks": 8192, 00:20:14.838 "block_size": 4096, 00:20:14.838 "physical_block_size": 4096, 00:20:14.838 "uuid": "228310fa-a15e-4d04-b572-97f558c51f83", 00:20:14.838 "optimal_io_boundary": 0, 00:20:14.838 "md_size": 0, 00:20:14.838 "dif_type": 0, 00:20:14.838 "dif_is_head_of_md": false, 00:20:14.838 "dif_pi_format": 0 00:20:14.838 } 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "method": "bdev_wait_for_examine" 00:20:14.838 } 00:20:14.838 ] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "nbd", 00:20:14.838 "config": [] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "scheduler", 00:20:14.838 "config": [ 00:20:14.838 { 00:20:14.838 "method": "framework_set_scheduler", 00:20:14.838 "params": { 00:20:14.838 "name": "static" 00:20:14.838 } 00:20:14.838 } 00:20:14.838 ] 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "subsystem": "nvmf", 00:20:14.838 "config": [ 00:20:14.838 { 00:20:14.838 "method": "nvmf_set_config", 00:20:14.838 "params": { 00:20:14.838 "discovery_filter": "match_any", 00:20:14.838 "admin_cmd_passthru": { 00:20:14.838 "identify_ctrlr": false 00:20:14.839 } 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "nvmf_set_max_subsystems", 00:20:14.839 "params": { 00:20:14.839 "max_subsystems": 1024 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "nvmf_set_crdt", 00:20:14.839 "params": { 00:20:14.839 "crdt1": 0, 00:20:14.839 "crdt2": 0, 00:20:14.839 "crdt3": 0 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "nvmf_create_transport", 00:20:14.839 "params": { 00:20:14.839 "trtype": "TCP", 00:20:14.839 "max_queue_depth": 128, 00:20:14.839 "max_io_qpairs_per_ctrlr": 127, 00:20:14.839 "in_capsule_data_size": 4096, 00:20:14.839 "max_io_size": 131072, 00:20:14.839 "io_unit_size": 
131072, 00:20:14.839 "max_aq_depth": 128, 00:20:14.839 "num_shared_buffers": 511, 00:20:14.839 "buf_cache_size": 4294967295, 00:20:14.839 "dif_insert_or_strip": false, 00:20:14.839 "zcopy": false, 00:20:14.839 "c2h_success": false, 00:20:14.839 "sock_priority": 0, 00:20:14.839 "abort_timeout_sec": 1, 00:20:14.839 "ack_timeout": 0, 00:20:14.839 "data_wr_pool_size": 0 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "nvmf_create_subsystem", 00:20:14.839 "params": { 00:20:14.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.839 "allow_any_host": false, 00:20:14.839 "serial_number": "00000000000000000000", 00:20:14.839 "model_number": "SPDK bdev Controller", 00:20:14.839 "max_namespaces": 32, 00:20:14.839 "min_cntlid": 1, 00:20:14.839 "max_cntlid": 65519, 00:20:14.839 "ana_reporting": false 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "nvmf_subsystem_add_host", 00:20:14.839 "params": { 00:20:14.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.839 "host": "nqn.2016-06.io.spdk:host1", 00:20:14.839 "psk": "key0" 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "nvmf_subsystem_add_ns", 00:20:14.839 "params": { 00:20:14.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.839 "namespace": { 00:20:14.839 "nsid": 1, 00:20:14.839 "bdev_name": "malloc0", 00:20:14.839 "nguid": "228310FAA15E4D04B57297F558C51F83", 00:20:14.839 "uuid": "228310fa-a15e-4d04-b572-97f558c51f83", 00:20:14.839 "no_auto_visible": false 00:20:14.839 } 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "nvmf_subsystem_add_listener", 00:20:14.839 "params": { 00:20:14.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.839 "listen_address": { 00:20:14.839 "trtype": "TCP", 00:20:14.839 "adrfam": "IPv4", 00:20:14.839 "traddr": "10.0.0.2", 00:20:14.839 "trsvcid": "4420" 00:20:14.839 }, 00:20:14.839 "secure_channel": false, 00:20:14.839 "sock_impl": "ssl" 00:20:14.839 } 00:20:14.839 } 00:20:14.839 ] 00:20:14.839 } 00:20:14.839 ] 
00:20:14.839 }' 00:20:14.839 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:14.839 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:14.839 "subsystems": [ 00:20:14.839 { 00:20:14.839 "subsystem": "keyring", 00:20:14.839 "config": [ 00:20:14.839 { 00:20:14.839 "method": "keyring_file_add_key", 00:20:14.839 "params": { 00:20:14.839 "name": "key0", 00:20:14.839 "path": "/tmp/tmp.V8XslquD3v" 00:20:14.839 } 00:20:14.839 } 00:20:14.839 ] 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "subsystem": "iobuf", 00:20:14.839 "config": [ 00:20:14.839 { 00:20:14.839 "method": "iobuf_set_options", 00:20:14.839 "params": { 00:20:14.839 "small_pool_count": 8192, 00:20:14.839 "large_pool_count": 1024, 00:20:14.839 "small_bufsize": 8192, 00:20:14.839 "large_bufsize": 135168 00:20:14.839 } 00:20:14.839 } 00:20:14.839 ] 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "subsystem": "sock", 00:20:14.839 "config": [ 00:20:14.839 { 00:20:14.839 "method": "sock_set_default_impl", 00:20:14.839 "params": { 00:20:14.839 "impl_name": "posix" 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "sock_impl_set_options", 00:20:14.839 "params": { 00:20:14.839 "impl_name": "ssl", 00:20:14.839 "recv_buf_size": 4096, 00:20:14.839 "send_buf_size": 4096, 00:20:14.839 "enable_recv_pipe": true, 00:20:14.839 "enable_quickack": false, 00:20:14.839 "enable_placement_id": 0, 00:20:14.839 "enable_zerocopy_send_server": true, 00:20:14.839 "enable_zerocopy_send_client": false, 00:20:14.839 "zerocopy_threshold": 0, 00:20:14.839 "tls_version": 0, 00:20:14.839 "enable_ktls": false 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "sock_impl_set_options", 00:20:14.839 "params": { 00:20:14.839 "impl_name": "posix", 00:20:14.839 "recv_buf_size": 2097152, 00:20:14.839 "send_buf_size": 2097152, 00:20:14.839 
"enable_recv_pipe": true, 00:20:14.839 "enable_quickack": false, 00:20:14.839 "enable_placement_id": 0, 00:20:14.839 "enable_zerocopy_send_server": true, 00:20:14.839 "enable_zerocopy_send_client": false, 00:20:14.839 "zerocopy_threshold": 0, 00:20:14.839 "tls_version": 0, 00:20:14.839 "enable_ktls": false 00:20:14.839 } 00:20:14.839 } 00:20:14.839 ] 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "subsystem": "vmd", 00:20:14.839 "config": [] 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "subsystem": "accel", 00:20:14.839 "config": [ 00:20:14.839 { 00:20:14.839 "method": "accel_set_options", 00:20:14.839 "params": { 00:20:14.839 "small_cache_size": 128, 00:20:14.839 "large_cache_size": 16, 00:20:14.839 "task_count": 2048, 00:20:14.839 "sequence_count": 2048, 00:20:14.839 "buf_count": 2048 00:20:14.839 } 00:20:14.839 } 00:20:14.839 ] 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "subsystem": "bdev", 00:20:14.839 "config": [ 00:20:14.839 { 00:20:14.839 "method": "bdev_set_options", 00:20:14.839 "params": { 00:20:14.839 "bdev_io_pool_size": 65535, 00:20:14.839 "bdev_io_cache_size": 256, 00:20:14.839 "bdev_auto_examine": true, 00:20:14.839 "iobuf_small_cache_size": 128, 00:20:14.839 "iobuf_large_cache_size": 16 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "bdev_raid_set_options", 00:20:14.839 "params": { 00:20:14.839 "process_window_size_kb": 1024, 00:20:14.839 "process_max_bandwidth_mb_sec": 0 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "bdev_iscsi_set_options", 00:20:14.839 "params": { 00:20:14.839 "timeout_sec": 30 00:20:14.839 } 00:20:14.839 }, 00:20:14.839 { 00:20:14.839 "method": "bdev_nvme_set_options", 00:20:14.839 "params": { 00:20:14.839 "action_on_timeout": "none", 00:20:14.839 "timeout_us": 0, 00:20:14.839 "timeout_admin_us": 0, 00:20:14.839 "keep_alive_timeout_ms": 10000, 00:20:14.839 "arbitration_burst": 0, 00:20:14.839 "low_priority_weight": 0, 00:20:14.839 "medium_priority_weight": 0, 00:20:14.839 
"high_priority_weight": 0, 00:20:14.839 "nvme_adminq_poll_period_us": 10000, 00:20:14.839 "nvme_ioq_poll_period_us": 0, 00:20:14.839 "io_queue_requests": 512, 00:20:14.839 "delay_cmd_submit": true, 00:20:14.839 "transport_retry_count": 4, 00:20:14.839 "bdev_retry_count": 3, 00:20:14.839 "transport_ack_timeout": 0, 00:20:14.839 "ctrlr_loss_timeout_sec": 0, 00:20:14.839 "reconnect_delay_sec": 0, 00:20:14.839 "fast_io_fail_timeout_sec": 0, 00:20:14.839 "disable_auto_failback": false, 00:20:14.839 "generate_uuids": false, 00:20:14.839 "transport_tos": 0, 00:20:14.839 "nvme_error_stat": false, 00:20:14.839 "rdma_srq_size": 0, 00:20:14.839 "io_path_stat": false, 00:20:14.839 "allow_accel_sequence": false, 00:20:14.839 "rdma_max_cq_size": 0, 00:20:14.839 "rdma_cm_event_timeout_ms": 0, 00:20:14.839 "dhchap_digests": [ 00:20:14.839 "sha256", 00:20:14.840 "sha384", 00:20:14.840 "sha512" 00:20:14.840 ], 00:20:14.840 "dhchap_dhgroups": [ 00:20:14.840 "null", 00:20:14.840 "ffdhe2048", 00:20:14.840 "ffdhe3072", 00:20:14.840 "ffdhe4096", 00:20:14.840 "ffdhe6144", 00:20:14.840 "ffdhe8192" 00:20:14.840 ] 00:20:14.840 } 00:20:14.840 }, 00:20:14.840 { 00:20:14.840 "method": "bdev_nvme_attach_controller", 00:20:14.840 "params": { 00:20:14.840 "name": "nvme0", 00:20:14.840 "trtype": "TCP", 00:20:14.840 "adrfam": "IPv4", 00:20:14.840 "traddr": "10.0.0.2", 00:20:14.840 "trsvcid": "4420", 00:20:14.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.840 "prchk_reftag": false, 00:20:14.840 "prchk_guard": false, 00:20:14.840 "ctrlr_loss_timeout_sec": 0, 00:20:14.840 "reconnect_delay_sec": 0, 00:20:14.840 "fast_io_fail_timeout_sec": 0, 00:20:14.840 "psk": "key0", 00:20:14.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.840 "hdgst": false, 00:20:14.840 "ddgst": false 00:20:14.840 } 00:20:14.840 }, 00:20:14.840 { 00:20:14.840 "method": "bdev_nvme_set_hotplug", 00:20:14.840 "params": { 00:20:14.840 "period_us": 100000, 00:20:14.840 "enable": false 00:20:14.840 } 00:20:14.840 }, 
00:20:14.840 { 00:20:14.840 "method": "bdev_enable_histogram", 00:20:14.840 "params": { 00:20:14.840 "name": "nvme0n1", 00:20:14.840 "enable": true 00:20:14.840 } 00:20:14.840 }, 00:20:14.840 { 00:20:14.840 "method": "bdev_wait_for_examine" 00:20:14.840 } 00:20:14.840 ] 00:20:14.840 }, 00:20:14.840 { 00:20:14.840 "subsystem": "nbd", 00:20:14.840 "config": [] 00:20:14.840 } 00:20:14.840 ] 00:20:14.840 }' 00:20:14.840 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1548899 00:20:14.840 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1548899 ']' 00:20:14.840 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1548899 00:20:14.840 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:14.840 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:14.840 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548899 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548899' 00:20:15.099 killing process with pid 1548899 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1548899 00:20:15.099 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.099 00:20:15.099 Latency(us) 00:20:15.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.099 =================================================================================================================== 00:20:15.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1548899 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1548654 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1548654 ']' 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1548654 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548654 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548654' 00:20:15.099 killing process with pid 1548654 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1548654 00:20:15.099 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1548654 00:20:15.359 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:15.359 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.359 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.359 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:15.359 "subsystems": [ 00:20:15.359 { 00:20:15.359 "subsystem": "keyring", 00:20:15.359 "config": [ 00:20:15.359 { 00:20:15.359 
"method": "keyring_file_add_key", 00:20:15.359 "params": { 00:20:15.359 "name": "key0", 00:20:15.359 "path": "/tmp/tmp.V8XslquD3v" 00:20:15.359 } 00:20:15.359 } 00:20:15.359 ] 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "subsystem": "iobuf", 00:20:15.359 "config": [ 00:20:15.359 { 00:20:15.359 "method": "iobuf_set_options", 00:20:15.359 "params": { 00:20:15.359 "small_pool_count": 8192, 00:20:15.359 "large_pool_count": 1024, 00:20:15.359 "small_bufsize": 8192, 00:20:15.359 "large_bufsize": 135168 00:20:15.359 } 00:20:15.359 } 00:20:15.359 ] 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "subsystem": "sock", 00:20:15.359 "config": [ 00:20:15.359 { 00:20:15.359 "method": "sock_set_default_impl", 00:20:15.359 "params": { 00:20:15.359 "impl_name": "posix" 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "sock_impl_set_options", 00:20:15.359 "params": { 00:20:15.359 "impl_name": "ssl", 00:20:15.359 "recv_buf_size": 4096, 00:20:15.359 "send_buf_size": 4096, 00:20:15.359 "enable_recv_pipe": true, 00:20:15.359 "enable_quickack": false, 00:20:15.359 "enable_placement_id": 0, 00:20:15.359 "enable_zerocopy_send_server": true, 00:20:15.359 "enable_zerocopy_send_client": false, 00:20:15.359 "zerocopy_threshold": 0, 00:20:15.359 "tls_version": 0, 00:20:15.359 "enable_ktls": false 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "sock_impl_set_options", 00:20:15.359 "params": { 00:20:15.359 "impl_name": "posix", 00:20:15.359 "recv_buf_size": 2097152, 00:20:15.359 "send_buf_size": 2097152, 00:20:15.359 "enable_recv_pipe": true, 00:20:15.359 "enable_quickack": false, 00:20:15.359 "enable_placement_id": 0, 00:20:15.359 "enable_zerocopy_send_server": true, 00:20:15.359 "enable_zerocopy_send_client": false, 00:20:15.359 "zerocopy_threshold": 0, 00:20:15.359 "tls_version": 0, 00:20:15.359 "enable_ktls": false 00:20:15.359 } 00:20:15.359 } 00:20:15.359 ] 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "subsystem": "vmd", 00:20:15.359 "config": [] 
00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "subsystem": "accel", 00:20:15.359 "config": [ 00:20:15.359 { 00:20:15.359 "method": "accel_set_options", 00:20:15.359 "params": { 00:20:15.359 "small_cache_size": 128, 00:20:15.359 "large_cache_size": 16, 00:20:15.359 "task_count": 2048, 00:20:15.359 "sequence_count": 2048, 00:20:15.359 "buf_count": 2048 00:20:15.359 } 00:20:15.359 } 00:20:15.359 ] 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "subsystem": "bdev", 00:20:15.359 "config": [ 00:20:15.359 { 00:20:15.359 "method": "bdev_set_options", 00:20:15.359 "params": { 00:20:15.359 "bdev_io_pool_size": 65535, 00:20:15.359 "bdev_io_cache_size": 256, 00:20:15.359 "bdev_auto_examine": true, 00:20:15.359 "iobuf_small_cache_size": 128, 00:20:15.359 "iobuf_large_cache_size": 16 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "bdev_raid_set_options", 00:20:15.359 "params": { 00:20:15.359 "process_window_size_kb": 1024, 00:20:15.359 "process_max_bandwidth_mb_sec": 0 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "bdev_iscsi_set_options", 00:20:15.359 "params": { 00:20:15.359 "timeout_sec": 30 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "bdev_nvme_set_options", 00:20:15.359 "params": { 00:20:15.359 "action_on_timeout": "none", 00:20:15.359 "timeout_us": 0, 00:20:15.359 "timeout_admin_us": 0, 00:20:15.359 "keep_alive_timeout_ms": 10000, 00:20:15.359 "arbitration_burst": 0, 00:20:15.359 "low_priority_weight": 0, 00:20:15.359 "medium_priority_weight": 0, 00:20:15.359 "high_priority_weight": 0, 00:20:15.359 "nvme_adminq_poll_period_us": 10000, 00:20:15.359 "nvme_ioq_poll_period_us": 0, 00:20:15.359 "io_queue_requests": 0, 00:20:15.359 "delay_cmd_submit": true, 00:20:15.359 "transport_retry_count": 4, 00:20:15.359 "bdev_retry_count": 3, 00:20:15.359 "transport_ack_timeout": 0, 00:20:15.359 "ctrlr_loss_timeout_sec": 0, 00:20:15.359 "reconnect_delay_sec": 0, 00:20:15.359 "fast_io_fail_timeout_sec": 0, 00:20:15.359 
"disable_auto_failback": false, 00:20:15.359 "generate_uuids": false, 00:20:15.359 "transport_tos": 0, 00:20:15.359 "nvme_error_stat": false, 00:20:15.359 "rdma_srq_size": 0, 00:20:15.359 "io_path_stat": false, 00:20:15.359 "allow_accel_sequence": false, 00:20:15.359 "rdma_max_cq_size": 0, 00:20:15.359 "rdma_cm_event_timeout_ms": 0, 00:20:15.359 "dhchap_digests": [ 00:20:15.359 "sha256", 00:20:15.359 "sha384", 00:20:15.359 "sha512" 00:20:15.359 ], 00:20:15.359 "dhchap_dhgroups": [ 00:20:15.359 "null", 00:20:15.359 "ffdhe2048", 00:20:15.359 "ffdhe3072", 00:20:15.359 "ffdhe4096", 00:20:15.359 "ffdhe6144", 00:20:15.359 "ffdhe8192" 00:20:15.359 ] 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "bdev_nvme_set_hotplug", 00:20:15.359 "params": { 00:20:15.359 "period_us": 100000, 00:20:15.359 "enable": false 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "bdev_malloc_create", 00:20:15.359 "params": { 00:20:15.359 "name": "malloc0", 00:20:15.359 "num_blocks": 8192, 00:20:15.359 "block_size": 4096, 00:20:15.359 "physical_block_size": 4096, 00:20:15.359 "uuid": "228310fa-a15e-4d04-b572-97f558c51f83", 00:20:15.359 "optimal_io_boundary": 0, 00:20:15.359 "md_size": 0, 00:20:15.359 "dif_type": 0, 00:20:15.359 "dif_is_head_of_md": false, 00:20:15.359 "dif_pi_format": 0 00:20:15.359 } 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "method": "bdev_wait_for_examine" 00:20:15.359 } 00:20:15.359 ] 00:20:15.359 }, 00:20:15.359 { 00:20:15.359 "subsystem": "nbd", 00:20:15.359 "config": [] 00:20:15.359 }, 00:20:15.359 { 00:20:15.360 "subsystem": "scheduler", 00:20:15.360 "config": [ 00:20:15.360 { 00:20:15.360 "method": "framework_set_scheduler", 00:20:15.360 "params": { 00:20:15.360 "name": "static" 00:20:15.360 } 00:20:15.360 } 00:20:15.360 ] 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "subsystem": "nvmf", 00:20:15.360 "config": [ 00:20:15.360 { 00:20:15.360 "method": "nvmf_set_config", 00:20:15.360 "params": { 00:20:15.360 "discovery_filter": 
"match_any", 00:20:15.360 "admin_cmd_passthru": { 00:20:15.360 "identify_ctrlr": false 00:20:15.360 } 00:20:15.360 } 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "method": "nvmf_set_max_subsystems", 00:20:15.360 "params": { 00:20:15.360 "max_subsystems": 1024 00:20:15.360 } 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "method": "nvmf_set_crdt", 00:20:15.360 "params": { 00:20:15.360 "crdt1": 0, 00:20:15.360 "crdt2": 0, 00:20:15.360 "crdt3": 0 00:20:15.360 } 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "method": "nvmf_create_transport", 00:20:15.360 "params": { 00:20:15.360 "trtype": "TCP", 00:20:15.360 "max_queue_depth": 128, 00:20:15.360 "max_io_qpairs_per_ctrlr": 127, 00:20:15.360 "in_capsule_data_size": 4096, 00:20:15.360 "max_io_size": 131072, 00:20:15.360 "io_unit_size": 131072, 00:20:15.360 "max_aq_depth": 128, 00:20:15.360 "num_shared_buffers": 511, 00:20:15.360 "buf_cache_size": 4294967295, 00:20:15.360 "dif_insert_or_strip": false, 00:20:15.360 "zcopy": false, 00:20:15.360 "c2h_success": false, 00:20:15.360 "sock_priority": 0, 00:20:15.360 "abort_timeout_sec": 1, 00:20:15.360 "ack_timeout": 0, 00:20:15.360 "data_wr_pool_size": 0 00:20:15.360 } 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "method": "nvmf_create_subsystem", 00:20:15.360 "params": { 00:20:15.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.360 "allow_any_host": false, 00:20:15.360 "serial_number": "00000000000000000000", 00:20:15.360 "model_number": "SPDK bdev Controller", 00:20:15.360 "max_namespaces": 32, 00:20:15.360 "min_cntlid": 1, 00:20:15.360 "max_cntlid": 65519, 00:20:15.360 "ana_reporting": false 00:20:15.360 } 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "method": "nvmf_subsystem_add_host", 00:20:15.360 "params": { 00:20:15.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.360 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.360 "psk": "key0" 00:20:15.360 } 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "method": "nvmf_subsystem_add_ns", 00:20:15.360 "params": { 00:20:15.360 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:15.360 "namespace": { 00:20:15.360 "nsid": 1, 00:20:15.360 "bdev_name": "malloc0", 00:20:15.360 "nguid": "228310FAA15E4D04B57297F558C51F83", 00:20:15.360 "uuid": "228310fa-a15e-4d04-b572-97f558c51f83", 00:20:15.360 "no_auto_visible": false 00:20:15.360 } 00:20:15.360 } 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "method": "nvmf_subsystem_add_listener", 00:20:15.360 "params": { 00:20:15.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.360 "listen_address": { 00:20:15.360 "trtype": "TCP", 00:20:15.360 "adrfam": "IPv4", 00:20:15.360 "traddr": "10.0.0.2", 00:20:15.360 "trsvcid": "4420" 00:20:15.360 }, 00:20:15.360 "secure_channel": false, 00:20:15.360 "sock_impl": "ssl" 00:20:15.360 } 00:20:15.360 } 00:20:15.360 ] 00:20:15.360 } 00:20:15.360 ] 00:20:15.360 }' 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1549382 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1549382 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1549382 ']' 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.360 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.360 [2024-07-26 11:28:11.000526] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:20:15.360 [2024-07-26 11:28:11.000573] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.619 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.619 [2024-07-26 11:28:11.072069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.619 [2024-07-26 11:28:11.142182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.619 [2024-07-26 11:28:11.142218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.619 [2024-07-26 11:28:11.142225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.619 [2024-07-26 11:28:11.142231] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.619 [2024-07-26 11:28:11.142236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.619 [2024-07-26 11:28:11.142280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.878 [2024-07-26 11:28:11.352413] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.878 [2024-07-26 11:28:11.398328] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.878 [2024-07-26 11:28:11.398544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1549626 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1549626 /var/tmp/bdevperf.sock 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1549626 ']' 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.447 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:16.447 "subsystems": [ 00:20:16.447 { 00:20:16.447 "subsystem": "keyring", 00:20:16.447 "config": [ 00:20:16.447 { 00:20:16.447 "method": "keyring_file_add_key", 00:20:16.447 "params": { 00:20:16.447 "name": "key0", 00:20:16.447 "path": "/tmp/tmp.V8XslquD3v" 00:20:16.447 } 00:20:16.447 } 00:20:16.447 ] 00:20:16.447 }, 00:20:16.447 { 00:20:16.447 "subsystem": "iobuf", 00:20:16.447 "config": [ 00:20:16.447 { 00:20:16.447 "method": "iobuf_set_options", 00:20:16.447 "params": { 00:20:16.447 "small_pool_count": 8192, 00:20:16.447 "large_pool_count": 1024, 00:20:16.447 "small_bufsize": 8192, 00:20:16.447 "large_bufsize": 135168 00:20:16.447 } 00:20:16.447 } 00:20:16.447 ] 00:20:16.447 }, 00:20:16.447 { 00:20:16.447 "subsystem": "sock", 00:20:16.447 "config": [ 00:20:16.447 { 00:20:16.447 "method": "sock_set_default_impl", 00:20:16.447 "params": { 00:20:16.447 "impl_name": "posix" 00:20:16.447 } 00:20:16.447 }, 00:20:16.447 { 00:20:16.447 "method": "sock_impl_set_options", 00:20:16.447 "params": { 00:20:16.447 "impl_name": "ssl", 00:20:16.447 "recv_buf_size": 4096, 00:20:16.447 "send_buf_size": 4096, 00:20:16.447 "enable_recv_pipe": true, 00:20:16.447 "enable_quickack": false, 00:20:16.447 "enable_placement_id": 0, 00:20:16.447 "enable_zerocopy_send_server": true, 00:20:16.447 "enable_zerocopy_send_client": false, 00:20:16.447 "zerocopy_threshold": 0, 00:20:16.447 "tls_version": 0, 00:20:16.447 "enable_ktls": false 00:20:16.447 } 00:20:16.447 }, 00:20:16.447 { 00:20:16.447 "method": "sock_impl_set_options", 00:20:16.447 "params": { 00:20:16.447 "impl_name": "posix", 
00:20:16.447 "recv_buf_size": 2097152, 00:20:16.447 "send_buf_size": 2097152, 00:20:16.447 "enable_recv_pipe": true, 00:20:16.447 "enable_quickack": false, 00:20:16.447 "enable_placement_id": 0, 00:20:16.447 "enable_zerocopy_send_server": true, 00:20:16.447 "enable_zerocopy_send_client": false, 00:20:16.447 "zerocopy_threshold": 0, 00:20:16.447 "tls_version": 0, 00:20:16.447 "enable_ktls": false 00:20:16.447 } 00:20:16.447 } 00:20:16.447 ] 00:20:16.447 }, 00:20:16.447 { 00:20:16.447 "subsystem": "vmd", 00:20:16.447 "config": [] 00:20:16.447 }, 00:20:16.447 { 00:20:16.447 "subsystem": "accel", 00:20:16.447 "config": [ 00:20:16.447 { 00:20:16.447 "method": "accel_set_options", 00:20:16.447 "params": { 00:20:16.447 "small_cache_size": 128, 00:20:16.447 "large_cache_size": 16, 00:20:16.447 "task_count": 2048, 00:20:16.447 "sequence_count": 2048, 00:20:16.447 "buf_count": 2048 00:20:16.447 } 00:20:16.447 } 00:20:16.447 ] 00:20:16.447 }, 00:20:16.447 { 00:20:16.447 "subsystem": "bdev", 00:20:16.447 "config": [ 00:20:16.447 { 00:20:16.447 "method": "bdev_set_options", 00:20:16.447 "params": { 00:20:16.447 "bdev_io_pool_size": 65535, 00:20:16.447 "bdev_io_cache_size": 256, 00:20:16.447 "bdev_auto_examine": true, 00:20:16.448 "iobuf_small_cache_size": 128, 00:20:16.448 "iobuf_large_cache_size": 16 00:20:16.448 } 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "method": "bdev_raid_set_options", 00:20:16.448 "params": { 00:20:16.448 "process_window_size_kb": 1024, 00:20:16.448 "process_max_bandwidth_mb_sec": 0 00:20:16.448 } 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "method": "bdev_iscsi_set_options", 00:20:16.448 "params": { 00:20:16.448 "timeout_sec": 30 00:20:16.448 } 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "method": "bdev_nvme_set_options", 00:20:16.448 "params": { 00:20:16.448 "action_on_timeout": "none", 00:20:16.448 "timeout_us": 0, 00:20:16.448 "timeout_admin_us": 0, 00:20:16.448 "keep_alive_timeout_ms": 10000, 00:20:16.448 "arbitration_burst": 0, 00:20:16.448 
"low_priority_weight": 0, 00:20:16.448 "medium_priority_weight": 0, 00:20:16.448 "high_priority_weight": 0, 00:20:16.448 "nvme_adminq_poll_period_us": 10000, 00:20:16.448 "nvme_ioq_poll_period_us": 0, 00:20:16.448 "io_queue_requests": 512, 00:20:16.448 "delay_cmd_submit": true, 00:20:16.448 "transport_retry_count": 4, 00:20:16.448 "bdev_retry_count": 3, 00:20:16.448 "transport_ack_timeout": 0, 00:20:16.448 "ctrlr_loss_timeout_sec": 0, 00:20:16.448 "reconnect_delay_sec": 0, 00:20:16.448 "fast_io_fail_timeout_sec": 0, 00:20:16.448 "disable_auto_failback": false, 00:20:16.448 "generate_uuids": false, 00:20:16.448 "transport_tos": 0, 00:20:16.448 "nvme_error_stat": false, 00:20:16.448 "rdma_srq_size": 0, 00:20:16.448 "io_path_stat": false, 00:20:16.448 "allow_accel_sequence": false, 00:20:16.448 "rdma_max_cq_size": 0, 00:20:16.448 "rdma_cm_event_timeout_ms": 0, 00:20:16.448 "dhchap_digests": [ 00:20:16.448 "sha256", 00:20:16.448 "sha384", 00:20:16.448 "sha512" 00:20:16.448 ], 00:20:16.448 "dhchap_dhgroups": [ 00:20:16.448 "null", 00:20:16.448 "ffdhe2048", 00:20:16.448 "ffdhe3072", 00:20:16.448 "ffdhe4096", 00:20:16.448 "ffdhe6144", 00:20:16.448 "ffdhe8192" 00:20:16.448 ] 00:20:16.448 } 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "method": "bdev_nvme_attach_controller", 00:20:16.448 "params": { 00:20:16.448 "name": "nvme0", 00:20:16.448 "trtype": "TCP", 00:20:16.448 "adrfam": "IPv4", 00:20:16.448 "traddr": "10.0.0.2", 00:20:16.448 "trsvcid": "4420", 00:20:16.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.448 "prchk_reftag": false, 00:20:16.448 "prchk_guard": false, 00:20:16.448 "ctrlr_loss_timeout_sec": 0, 00:20:16.448 "reconnect_delay_sec": 0, 00:20:16.448 "fast_io_fail_timeout_sec": 0, 00:20:16.448 "psk": "key0", 00:20:16.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.448 "hdgst": false, 00:20:16.448 "ddgst": false 00:20:16.448 } 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "method": "bdev_nvme_set_hotplug", 00:20:16.448 "params": { 00:20:16.448 
"period_us": 100000, 00:20:16.448 "enable": false 00:20:16.448 } 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "method": "bdev_enable_histogram", 00:20:16.448 "params": { 00:20:16.448 "name": "nvme0n1", 00:20:16.448 "enable": true 00:20:16.448 } 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "method": "bdev_wait_for_examine" 00:20:16.448 } 00:20:16.448 ] 00:20:16.448 }, 00:20:16.448 { 00:20:16.448 "subsystem": "nbd", 00:20:16.448 "config": [] 00:20:16.448 } 00:20:16.448 ] 00:20:16.448 }' 00:20:16.448 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.448 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.448 [2024-07-26 11:28:11.882720] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:20:16.448 [2024-07-26 11:28:11.882766] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549626 ] 00:20:16.448 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.448 [2024-07-26 11:28:11.949920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.448 [2024-07-26 11:28:12.023197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.707 [2024-07-26 11:28:12.174562] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.275 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.275 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:17.275 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:17.275 11:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:17.275 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.275 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:17.533 Running I/O for 1 seconds... 00:20:18.470 00:20:18.470 Latency(us) 00:20:18.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.470 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:18.470 Verification LBA range: start 0x0 length 0x2000 00:20:18.470 nvme0n1 : 1.01 5545.08 21.66 0.00 0.00 22911.76 4712.35 21221.18 00:20:18.470 =================================================================================================================== 00:20:18.470 Total : 5545.08 21.66 0.00 0.00 22911.76 4712.35 21221.18 00:20:18.470 0 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:18.470 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:18.470 nvmf_trace.0 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1549626 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1549626 ']' 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1549626 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1549626 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1549626' 00:20:18.470 killing process with pid 1549626 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1549626 00:20:18.470 Received shutdown signal, test time was about 1.000000 seconds 00:20:18.470 00:20:18.470 Latency(us) 00:20:18.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.470 
=================================================================================================================== 00:20:18.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.470 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1549626 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.729 rmmod nvme_tcp 00:20:18.729 rmmod nvme_fabrics 00:20:18.729 rmmod nvme_keyring 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1549382 ']' 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1549382 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1549382 ']' 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1549382 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:20:18.729 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1549382 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1549382' 00:20:18.988 killing process with pid 1549382 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1549382 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1549382 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.988 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.mXRMjH7tXk /tmp/tmp.WpzLDhyiIC /tmp/tmp.V8XslquD3v 00:20:21.524 00:20:21.524 real 1m25.346s 
00:20:21.524 user 2m11.053s 00:20:21.524 sys 0m29.598s 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.524 ************************************ 00:20:21.524 END TEST nvmf_tls 00:20:21.524 ************************************ 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.524 ************************************ 00:20:21.524 START TEST nvmf_fips 00:20:21.524 ************************************ 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:21.524 * Looking for test storage... 
00:20:21.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.524 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:21.525 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:21.526 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:21.526 Error setting digest 00:20:21.526 00F2CF85D67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:21.526 00F2CF85D67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:21.526 11:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:21.526 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.801 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:26.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:26.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.801 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:26.801 Found net devices under 0000:86:00.0: cvl_0_0 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.801 
11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:26.801 Found net devices under 0000:86:00.1: cvl_0_1 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.801 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.802 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:27.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:27.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:20:27.060 00:20:27.060 --- 10.0.0.2 ping statistics --- 00:20:27.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.060 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:27.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:27.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:20:27.060 00:20:27.060 --- 10.0.0.1 ping statistics --- 00:20:27.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.060 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:27.060 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1553434 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1553434 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1553434 ']' 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.061 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:27.319 [2024-07-26 11:28:22.776804] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:20:27.319 [2024-07-26 11:28:22.776849] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.319 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.319 [2024-07-26 11:28:22.832022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.319 [2024-07-26 11:28:22.905398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.319 [2024-07-26 11:28:22.905434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.319 [2024-07-26 11:28:22.905441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.319 [2024-07-26 11:28:22.905448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.319 [2024-07-26 11:28:22.905453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:27.319 [2024-07-26 11:28:22.905471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.251 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.251 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:28.251 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.251 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.251 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.252 [2024-07-26 11:28:23.760540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.252 [2024-07-26 11:28:23.776547] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.252 [2024-07-26 11:28:23.776734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.252 [2024-07-26 11:28:23.804769] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:28.252 malloc0 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1553674 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1553674 /var/tmp/bdevperf.sock 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1553674 ']' 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.252 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.252 [2024-07-26 11:28:23.898268] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:20:28.252 [2024-07-26 11:28:23.898316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553674 ] 00:20:28.511 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.511 [2024-07-26 11:28:23.965774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.511 [2024-07-26 11:28:24.038397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.147 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.147 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:29.147 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:29.406 [2024-07-26 11:28:24.841076] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.406 [2024-07-26 11:28:24.841168] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:29.406 TLSTESTn1 00:20:29.406 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:29.406 Running I/O for 10 seconds... 00:20:41.624 00:20:41.624 Latency(us) 00:20:41.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.624 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:41.624 Verification LBA range: start 0x0 length 0x2000 00:20:41.624 TLSTESTn1 : 10.01 5587.24 21.83 0.00 0.00 22874.31 6085.49 29459.99 00:20:41.624 =================================================================================================================== 00:20:41.624 Total : 5587.24 21.83 0.00 0.00 22874.31 6085.49 29459.99 00:20:41.624 0 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:41.624 nvmf_trace.0 
00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1553674 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1553674 ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1553674 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553674 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553674' 00:20:41.624 killing process with pid 1553674 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1553674 00:20:41.624 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.624 00:20:41.624 Latency(us) 00:20:41.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.624 =================================================================================================================== 00:20:41.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.624 [2024-07-26 11:28:35.209541] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
1553674 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.624 rmmod nvme_tcp 00:20:41.624 rmmod nvme_fabrics 00:20:41.624 rmmod nvme_keyring 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1553434 ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1553434 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1553434 ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1553434 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553434 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553434' 00:20:41.624 killing process with pid 1553434 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1553434 00:20:41.624 [2024-07-26 11:28:35.504643] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1553434 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:42.192 00:20:42.192 real 0m21.027s 00:20:42.192 user 0m22.612s 00:20:42.192 sys 
0m9.218s 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:42.192 ************************************ 00:20:42.192 END TEST nvmf_fips 00:20:42.192 ************************************ 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:42.192 11:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.193 11:28:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local 
-ga e810 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:48.763 11:28:43 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:48.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:48.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:48.763 Found net devices under 0000:86:00.0: cvl_0_0 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:48.763 
Found net devices under 0000:86:00.1: cvl_0_1 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.763 ************************************ 00:20:48.763 START TEST nvmf_perf_adq 00:20:48.763 ************************************ 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.763 * Looking for test storage... 
00:20:48.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.763 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.764 11:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.764 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:54.039 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:54.039 11:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:54.039 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:54.039 Found net devices under 0000:86:00.0: cvl_0_0 00:20:54.039 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:54.040 Found net devices under 0000:86:00.1: cvl_0_1 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:54.040 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:54.608 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:56.512 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.784 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:01.785 
11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:01.785 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:01.785 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.785 11:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:01.785 Found net devices under 0000:86:00.0: cvl_0_0 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.785 11:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:01.785 Found net devices under 0000:86:00.1: cvl_0_1 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.785 
11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:01.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:21:01.785 00:21:01.785 --- 10.0.0.2 ping statistics --- 00:21:01.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.785 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:21:01.785 00:21:01.785 --- 10.0.0.1 ping statistics --- 00:21:01.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.785 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
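The nvmf_tcp_init sequence logged above (nvmf/common.sh@242-268) splits the two E810 ports across a network namespace so the target and initiator can talk over real hardware on one host. A sketch of those steps as a single function, using this run's interface names (cvl_0_0, cvl_0_1) and addresses; it needs root and the actual devices, so it is only defined here, not invoked:

```shell
# Sketch of the netns split performed by nvmf/common.sh in this run.
# Interface names and the 10.0.0.0/24 addressing are taken from the log;
# running this requires root and the real NIC ports, so it is define-only.
setup_nvmf_netns() {
    target_if=$1      # e.g. cvl_0_0 (moves into the namespace)
    initiator_if=$2   # e.g. cvl_0_1 (stays in the default namespace)
    ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"                      # target port into netns
    ip addr add 10.0.0.1/24 dev "$initiator_if"               # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP (port 4420) in from the initiator interface.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify both directions, as the log does with ping -c 1.
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The round-trip pings in the log (0.147 ms and 0.076 ms) are the success criterion for this step before the target is started inside the namespace.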
start_nvmf_tgt 00:21:01.785 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1563542 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1563542 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1563542 ']' 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.786 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.786 [2024-07-26 11:28:57.403385] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:21:01.786 [2024-07-26 11:28:57.403430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.786 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.044 [2024-07-26 11:28:57.476297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.044 [2024-07-26 11:28:57.550591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.044 [2024-07-26 11:28:57.550635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.044 [2024-07-26 11:28:57.550642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.044 [2024-07-26 11:28:57.550648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.044 [2024-07-26 11:28:57.550653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.044 [2024-07-26 11:28:57.550768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.044 [2024-07-26 11:28:57.550872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.044 [2024-07-26 11:28:57.550979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.044 [2024-07-26 11:28:57.550981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:02.609 11:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.609 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.867 [2024-07-26 11:28:58.374712] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.867 Malloc1 00:21:02.867 11:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.867 [2024-07-26 11:28:58.422334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.867 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.868 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1563642 00:21:02.868 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:02.868 11:28:58 
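The target configuration driven by perf_adq.sh@43-49 above can be read as five RPCs against the nvmf_tgt started with --wait-for-rpc. Reconstructed below as a list of rpc.py invocations (arguments copied from the log; the commands are listed, not executed, since they assume a running target inside the namespace):

```shell
# RPC sequence from perf_adq.sh in this run, in order. Each entry is the
# argument string that rpc_cmd passed to scripts/rpc.py (path assumed).
rpc_setup_cmds=(
  "sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix"
  "framework_start_init"
  "nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0"
  "bdev_malloc_create 64 512 -b Malloc1"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${rpc_setup_cmds[@]}"
```

Note the ordering constraint visible in the log: sock_impl_set_options must land before framework_start_init (the target was started with --wait-for-rpc precisely so the socket implementation can be configured first), and the listener on 10.0.0.2:4420 comes last.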
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:02.868 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.396 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:05.397 "tick_rate": 2100000000, 00:21:05.397 "poll_groups": [ 00:21:05.397 { 00:21:05.397 "name": "nvmf_tgt_poll_group_000", 00:21:05.397 "admin_qpairs": 1, 00:21:05.397 "io_qpairs": 1, 00:21:05.397 "current_admin_qpairs": 1, 00:21:05.397 "current_io_qpairs": 1, 00:21:05.397 "pending_bdev_io": 0, 00:21:05.397 "completed_nvme_io": 21010, 00:21:05.397 "transports": [ 00:21:05.397 { 00:21:05.397 "trtype": "TCP" 00:21:05.397 } 00:21:05.397 ] 00:21:05.397 }, 00:21:05.397 { 00:21:05.397 "name": "nvmf_tgt_poll_group_001", 00:21:05.397 "admin_qpairs": 0, 00:21:05.397 "io_qpairs": 1, 00:21:05.397 "current_admin_qpairs": 0, 00:21:05.397 "current_io_qpairs": 1, 00:21:05.397 "pending_bdev_io": 0, 00:21:05.397 "completed_nvme_io": 21216, 00:21:05.397 "transports": [ 00:21:05.397 { 00:21:05.397 "trtype": "TCP" 00:21:05.397 } 00:21:05.397 ] 00:21:05.397 }, 00:21:05.397 { 00:21:05.397 "name": "nvmf_tgt_poll_group_002", 00:21:05.397 "admin_qpairs": 0, 00:21:05.397 "io_qpairs": 1, 00:21:05.397 "current_admin_qpairs": 0, 00:21:05.397 "current_io_qpairs": 1, 00:21:05.397 "pending_bdev_io": 0, 
00:21:05.397 "completed_nvme_io": 20873, 00:21:05.397 "transports": [ 00:21:05.397 { 00:21:05.397 "trtype": "TCP" 00:21:05.397 } 00:21:05.397 ] 00:21:05.397 }, 00:21:05.397 { 00:21:05.397 "name": "nvmf_tgt_poll_group_003", 00:21:05.397 "admin_qpairs": 0, 00:21:05.397 "io_qpairs": 1, 00:21:05.397 "current_admin_qpairs": 0, 00:21:05.397 "current_io_qpairs": 1, 00:21:05.397 "pending_bdev_io": 0, 00:21:05.397 "completed_nvme_io": 21032, 00:21:05.397 "transports": [ 00:21:05.397 { 00:21:05.397 "trtype": "TCP" 00:21:05.397 } 00:21:05.397 ] 00:21:05.397 } 00:21:05.397 ] 00:21:05.397 }' 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:05.397 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1563642 00:21:13.506 Initializing NVMe Controllers 00:21:13.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:13.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:13.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:13.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:13.506 Initialization complete. Launching workers. 
00:21:13.506 ======================================================== 00:21:13.506 Latency(us) 00:21:13.506 Device Information : IOPS MiB/s Average min max 00:21:13.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11094.50 43.34 5769.95 2249.92 8815.49 00:21:13.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11247.30 43.93 5691.83 1805.75 13645.67 00:21:13.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11061.10 43.21 5786.66 1893.07 10420.84 00:21:13.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11128.40 43.47 5752.76 1812.43 10375.81 00:21:13.506 ======================================================== 00:21:13.506 Total : 44531.30 173.95 5750.07 1805.75 13645.67 00:21:13.506 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:13.506 rmmod nvme_tcp 00:21:13.506 rmmod nvme_fabrics 00:21:13.506 rmmod nvme_keyring 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:13.506 11:29:08 
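The nvmf_get_stats check above (perf_adq.sh@77-79: jq over the poll groups, then `wc -l`, giving count=4) verifies that ADQ steered each of the four perf connections onto its own poll group. A minimal dependency-free stand-in for that pipeline, using `grep -c` instead of jq and a hypothetical stats sample mirroring the trace, could look like:

```shell
# Hypothetical stand-in for the perf_adq.sh@77-79 placement check:
# count poll groups reporting exactly one active I/O qpair and require
# one per core of the 0xF0 perf mask (4 cores). Sample JSON is made up
# to mirror the nvmf_get_stats output captured in the trace above.
stats='{"poll_groups":[
 {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs":1')
if [ "$count" -ne 4 ]; then
    echo "ADQ placement check failed: $count of 4 poll groups busy" >&2
else
    echo "all 4 poll groups own one io_qpair"
fi
```

The real script feeds live `rpc_cmd nvmf_get_stats` output through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1)'`; grep is used here only to keep the sketch self-contained.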
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1563542 ']' 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1563542 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1563542 ']' 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1563542 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1563542 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1563542' 00:21:13.506 killing process with pid 1563542 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1563542 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1563542 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.506 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.410 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.410 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:15.410 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:16.857 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:18.759 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
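The adq_reload_driver step above (perf_adq.sh@53-55) bounces the `ice` driver between the two runs so the E810 NIC comes back without leftover channel state. Echoed rather than executed, since it needs root and real hardware, the sequence amounts to:

```shell
# Dry-run sketch of adq_reload_driver from the trace above. run()
# echoes instead of executing, since rmmod/modprobe need root and an
# Intel E810 (ice-driven) NIC to do anything meaningful.
run() { echo "+ $*"; }
run rmmod ice       # unload the ice driver, dropping ADQ channel state
run modprobe ice    # reload it fresh
run sleep 5         # give the ports time to come back up
```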
00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:24.034 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.035 11:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:24.035 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:24.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:24.035 Found net devices under 0000:86:00.0: cvl_0_0 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:24.035 Found net devices under 0000:86:00.1: cvl_0_1 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:24.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:21:24.035 00:21:24.035 --- 10.0.0.2 ping statistics --- 00:21:24.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.035 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:21:24.035 00:21:24.035 --- 10.0.0.1 ping statistics --- 00:21:24.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.035 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:24.035 net.core.busy_poll = 1 00:21:24.035 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:24.035 net.core.busy_read = 1 00:21:24.036 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:24.036 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
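The adq_configure_driver sequence above enables hardware TC offload, turns on kernel busy polling, and installs an mqprio/flower configuration that pins NVMe/TCP traffic (port 4420) to a dedicated traffic class. A dry-run transcription of those commands (echoed, not executed; the real script runs them inside the cvl_0_0_ns_spdk namespace with the full /usr/sbin/tc path):

```shell
# Dry-run transcription of adq_configure_driver (perf_adq.sh@22-35).
# run() echoes so this is safe without root, a netns, or an E810 NIC.
DEV=cvl_0_0
run() { echo "+ $*"; }
run ethtool --offload "$DEV" hw-tc-offload on
run ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
run sysctl -w net.core.busy_poll=1
run sysctl -w net.core.busy_read=1
# TC0 = default traffic (queues 0-1), TC1 = NVMe/TCP (queues 2-3)
run tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
run tc qdisc add dev "$DEV" ingress
# steer TCP traffic to 10.0.0.2:4420 into TC1 in hardware (skip_sw)
run tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The `hw 1 mode channel` mqprio option is what maps the traffic classes onto hardware queue sets, and `skip_sw` forces the flower filter to be offloaded; both are taken directly from the trace.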
00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1567438 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1567438 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1567438 ']' 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.295 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.295 [2024-07-26 11:29:19.899109] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:21:24.295 [2024-07-26 11:29:19.899154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.295 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.553 [2024-07-26 11:29:19.968117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.553 [2024-07-26 11:29:20.054642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.553 [2024-07-26 11:29:20.054676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.553 [2024-07-26 11:29:20.054683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.553 [2024-07-26 11:29:20.054689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.553 [2024-07-26 11:29:20.054694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:24.553 [2024-07-26 11:29:20.054760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.553 [2024-07-26 11:29:20.054865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.553 [2024-07-26 11:29:20.054970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.553 [2024-07-26 11:29:20.054971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.119 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:25.376 11:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.376 [2024-07-26 11:29:20.892533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.376 Malloc1 00:21:25.376 11:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.376 [2024-07-26 11:29:20.940047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.376 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.377 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1567685 00:21:25.377 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:25.377 11:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:25.377 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.909 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:27.909 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.910 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.910 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.910 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:27.910 "tick_rate": 2100000000, 00:21:27.910 "poll_groups": [ 00:21:27.910 { 00:21:27.910 "name": "nvmf_tgt_poll_group_000", 00:21:27.910 "admin_qpairs": 1, 00:21:27.910 "io_qpairs": 1, 00:21:27.910 "current_admin_qpairs": 1, 00:21:27.910 "current_io_qpairs": 1, 00:21:27.910 "pending_bdev_io": 0, 00:21:27.910 "completed_nvme_io": 28211, 00:21:27.910 "transports": [ 00:21:27.910 { 00:21:27.910 "trtype": "TCP" 00:21:27.910 } 00:21:27.910 ] 00:21:27.910 }, 00:21:27.910 { 00:21:27.910 "name": "nvmf_tgt_poll_group_001", 00:21:27.910 "admin_qpairs": 0, 00:21:27.910 "io_qpairs": 3, 00:21:27.910 "current_admin_qpairs": 0, 00:21:27.910 "current_io_qpairs": 3, 00:21:27.910 "pending_bdev_io": 0, 00:21:27.910 "completed_nvme_io": 30626, 00:21:27.910 "transports": [ 00:21:27.910 { 00:21:27.910 "trtype": "TCP" 00:21:27.910 } 00:21:27.910 ] 00:21:27.910 }, 00:21:27.910 { 00:21:27.910 "name": "nvmf_tgt_poll_group_002", 00:21:27.910 "admin_qpairs": 0, 00:21:27.910 "io_qpairs": 0, 00:21:27.910 "current_admin_qpairs": 0, 00:21:27.910 "current_io_qpairs": 0, 00:21:27.910 "pending_bdev_io": 0, 
00:21:27.910 "completed_nvme_io": 0, 00:21:27.910 "transports": [ 00:21:27.910 { 00:21:27.910 "trtype": "TCP" 00:21:27.910 } 00:21:27.910 ] 00:21:27.910 }, 00:21:27.910 { 00:21:27.911 "name": "nvmf_tgt_poll_group_003", 00:21:27.911 "admin_qpairs": 0, 00:21:27.911 "io_qpairs": 0, 00:21:27.911 "current_admin_qpairs": 0, 00:21:27.911 "current_io_qpairs": 0, 00:21:27.911 "pending_bdev_io": 0, 00:21:27.911 "completed_nvme_io": 0, 00:21:27.911 "transports": [ 00:21:27.911 { 00:21:27.911 "trtype": "TCP" 00:21:27.911 } 00:21:27.911 ] 00:21:27.911 } 00:21:27.911 ] 00:21:27.911 }' 00:21:27.911 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:27.911 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:27.911 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:27.911 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:27.911 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1567685 00:21:36.013 Initializing NVMe Controllers 00:21:36.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:36.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:36.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:36.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:36.013 Initialization complete. Launching workers. 
00:21:36.013 ======================================================== 00:21:36.013 Latency(us) 00:21:36.013 Device Information : IOPS MiB/s Average min max 00:21:36.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5534.50 21.62 11604.20 1579.72 59961.02 00:21:36.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15572.70 60.83 4109.47 1391.71 6880.01 00:21:36.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5207.00 20.34 12319.33 1562.87 60561.99 00:21:36.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5074.80 19.82 12616.80 1736.62 59396.61 00:21:36.013 ======================================================== 00:21:36.013 Total : 31389.00 122.61 8168.26 1391.71 60561.99 00:21:36.013 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.013 rmmod nvme_tcp 00:21:36.013 rmmod nvme_fabrics 00:21:36.013 rmmod nvme_keyring 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:36.013 11:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1567438 ']' 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1567438 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1567438 ']' 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1567438 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1567438 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1567438' 00:21:36.013 killing process with pid 1567438 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1567438 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1567438 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.013 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:37.917 00:21:37.917 real 0m50.237s 00:21:37.917 user 2m49.449s 00:21:37.917 sys 0m9.536s 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.917 ************************************ 00:21:37.917 END TEST nvmf_perf_adq 00:21:37.917 ************************************ 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.917 11:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:38.176 ************************************ 00:21:38.176 START TEST nvmf_shutdown 00:21:38.176 ************************************ 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:38.176 * Looking for test storage... 
00:21:38.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.176 11:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.176 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:38.177 11:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:38.177 ************************************ 00:21:38.177 START TEST nvmf_shutdown_tc1 00:21:38.177 ************************************ 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.177 11:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.177 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:44.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.749 11:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:44.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:44.749 Found net devices under 0000:86:00.0: cvl_0_0 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:44.749 Found net devices under 0000:86:00.1: cvl_0_1 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.749 11:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.749 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:21:44.750 00:21:44.750 --- 10.0.0.2 ping statistics --- 00:21:44.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.750 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:21:44.750 00:21:44.750 --- 10.0.0.1 ping statistics --- 00:21:44.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.750 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.750 
11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1572895 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1572895 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1572895 ']' 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.750 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.750 [2024-07-26 11:29:39.650389] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:21:44.750 [2024-07-26 11:29:39.650428] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.750 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.750 [2024-07-26 11:29:39.720021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.750 [2024-07-26 11:29:39.798798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.750 [2024-07-26 11:29:39.798831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.750 [2024-07-26 11:29:39.798838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.750 [2024-07-26 11:29:39.798843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.750 [2024-07-26 11:29:39.798852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:44.750 [2024-07-26 11:29:39.798961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.750 [2024-07-26 11:29:39.799086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.750 [2024-07-26 11:29:39.799190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.750 [2024-07-26 11:29:39.799192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:45.008 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.008 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:45.008 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.008 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.008 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.008 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.009 [2024-07-26 11:29:40.499095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.009 11:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.009 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.009 Malloc1 00:21:45.009 [2024-07-26 11:29:40.594814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.009 Malloc2 00:21:45.009 Malloc3 00:21:45.267 Malloc4 00:21:45.267 Malloc5 00:21:45.267 Malloc6 00:21:45.267 Malloc7 00:21:45.267 Malloc8 00:21:45.267 Malloc9 
00:21:45.526 Malloc10 00:21:45.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:45.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1573178 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1573178 /var/tmp/bdevperf.sock 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1573178 ']' 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.526 "adrfam": "ipv4", 00:21:45.526 "trsvcid": "$NVMF_PORT", 00:21:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.526 "hdgst": ${hdgst:-false}, 00:21:45.526 "ddgst": ${ddgst:-false} 00:21:45.526 }, 00:21:45.526 "method": "bdev_nvme_attach_controller" 00:21:45.526 } 00:21:45.526 EOF 00:21:45.526 )") 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.526 "adrfam": "ipv4", 00:21:45.526 "trsvcid": "$NVMF_PORT", 00:21:45.526 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.526 "hdgst": ${hdgst:-false}, 00:21:45.526 "ddgst": ${ddgst:-false} 00:21:45.526 }, 00:21:45.526 "method": "bdev_nvme_attach_controller" 00:21:45.526 } 00:21:45.526 EOF 00:21:45.526 )") 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.526 "adrfam": "ipv4", 00:21:45.526 "trsvcid": "$NVMF_PORT", 00:21:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.526 "hdgst": ${hdgst:-false}, 00:21:45.526 "ddgst": ${ddgst:-false} 00:21:45.526 }, 00:21:45.526 "method": "bdev_nvme_attach_controller" 00:21:45.526 } 00:21:45.526 EOF 00:21:45.526 )") 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.526 "adrfam": "ipv4", 00:21:45.526 "trsvcid": "$NVMF_PORT", 00:21:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.526 "hdgst": 
${hdgst:-false}, 00:21:45.526 "ddgst": ${ddgst:-false} 00:21:45.526 }, 00:21:45.526 "method": "bdev_nvme_attach_controller" 00:21:45.526 } 00:21:45.526 EOF 00:21:45.526 )") 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.526 "adrfam": "ipv4", 00:21:45.526 "trsvcid": "$NVMF_PORT", 00:21:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.526 "hdgst": ${hdgst:-false}, 00:21:45.526 "ddgst": ${ddgst:-false} 00:21:45.526 }, 00:21:45.526 "method": "bdev_nvme_attach_controller" 00:21:45.526 } 00:21:45.526 EOF 00:21:45.526 )") 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.526 "adrfam": "ipv4", 00:21:45.526 "trsvcid": "$NVMF_PORT", 00:21:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.526 "hdgst": ${hdgst:-false}, 00:21:45.526 "ddgst": ${ddgst:-false} 00:21:45.526 }, 00:21:45.526 "method": "bdev_nvme_attach_controller" 
00:21:45.526 } 00:21:45.526 EOF 00:21:45.526 )") 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.526 "adrfam": "ipv4", 00:21:45.526 "trsvcid": "$NVMF_PORT", 00:21:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.526 "hdgst": ${hdgst:-false}, 00:21:45.526 "ddgst": ${ddgst:-false} 00:21:45.526 }, 00:21:45.526 "method": "bdev_nvme_attach_controller" 00:21:45.526 } 00:21:45.526 EOF 00:21:45.526 )") 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.526 [2024-07-26 11:29:41.068861] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:21:45.526 [2024-07-26 11:29:41.068912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.526 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.526 { 00:21:45.526 "params": { 00:21:45.526 "name": "Nvme$subsystem", 00:21:45.526 "trtype": "$TEST_TRANSPORT", 00:21:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "$NVMF_PORT", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.527 "hdgst": ${hdgst:-false}, 00:21:45.527 "ddgst": ${ddgst:-false} 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 } 00:21:45.527 EOF 00:21:45.527 )") 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.527 { 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme$subsystem", 00:21:45.527 "trtype": "$TEST_TRANSPORT", 00:21:45.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "$NVMF_PORT", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.527 "hdgst": ${hdgst:-false}, 00:21:45.527 "ddgst": ${ddgst:-false} 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 
00:21:45.527 } 00:21:45.527 EOF 00:21:45.527 )") 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.527 { 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme$subsystem", 00:21:45.527 "trtype": "$TEST_TRANSPORT", 00:21:45.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "$NVMF_PORT", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.527 "hdgst": ${hdgst:-false}, 00:21:45.527 "ddgst": ${ddgst:-false} 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 } 00:21:45.527 EOF 00:21:45.527 )") 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:45.527 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:45.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme1", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme2", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme3", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme4", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": 
"bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme5", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme6", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme7", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme8", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme9", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": 
"nqn.2016-06.io.spdk:cnode9", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 },{ 00:21:45.527 "params": { 00:21:45.527 "name": "Nvme10", 00:21:45.527 "trtype": "tcp", 00:21:45.527 "traddr": "10.0.0.2", 00:21:45.527 "adrfam": "ipv4", 00:21:45.527 "trsvcid": "4420", 00:21:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:45.527 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:45.527 "hdgst": false, 00:21:45.527 "ddgst": false 00:21:45.527 }, 00:21:45.527 "method": "bdev_nvme_attach_controller" 00:21:45.527 }' 00:21:45.527 [2024-07-26 11:29:41.140270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.785 [2024-07-26 11:29:41.213404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1573178 00:21:47.158 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:47.158 11:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:48.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1573178 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1572895 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.090 { 00:21:48.090 "params": { 00:21:48.090 "name": "Nvme$subsystem", 00:21:48.090 "trtype": "$TEST_TRANSPORT", 00:21:48.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.090 "adrfam": "ipv4", 00:21:48.090 "trsvcid": "$NVMF_PORT", 00:21:48.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.090 "hdgst": ${hdgst:-false}, 00:21:48.090 "ddgst": ${ddgst:-false} 00:21:48.090 }, 00:21:48.090 "method": "bdev_nvme_attach_controller" 00:21:48.090 } 00:21:48.090 EOF 00:21:48.090 )") 00:21:48.090 11:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.090 { 00:21:48.090 "params": { 00:21:48.090 "name": "Nvme$subsystem", 00:21:48.090 "trtype": "$TEST_TRANSPORT", 00:21:48.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.090 "adrfam": "ipv4", 00:21:48.090 "trsvcid": "$NVMF_PORT", 00:21:48.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.090 "hdgst": ${hdgst:-false}, 00:21:48.090 "ddgst": ${ddgst:-false} 00:21:48.090 }, 00:21:48.090 "method": "bdev_nvme_attach_controller" 00:21:48.090 } 00:21:48.090 EOF 00:21:48.090 )") 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.090 { 00:21:48.090 "params": { 00:21:48.090 "name": "Nvme$subsystem", 00:21:48.090 "trtype": "$TEST_TRANSPORT", 00:21:48.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.090 "adrfam": "ipv4", 00:21:48.090 "trsvcid": "$NVMF_PORT", 00:21:48.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.090 "hdgst": ${hdgst:-false}, 00:21:48.090 "ddgst": ${ddgst:-false} 00:21:48.090 }, 00:21:48.090 "method": "bdev_nvme_attach_controller" 00:21:48.090 } 00:21:48.090 EOF 00:21:48.090 )") 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.090 11:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.090 { 00:21:48.090 "params": { 00:21:48.090 "name": "Nvme$subsystem", 00:21:48.090 "trtype": "$TEST_TRANSPORT", 00:21:48.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.090 "adrfam": "ipv4", 00:21:48.090 "trsvcid": "$NVMF_PORT", 00:21:48.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.090 "hdgst": ${hdgst:-false}, 00:21:48.090 "ddgst": ${ddgst:-false} 00:21:48.090 }, 00:21:48.090 "method": "bdev_nvme_attach_controller" 00:21:48.090 } 00:21:48.090 EOF 00:21:48.090 )") 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.090 { 00:21:48.090 "params": { 00:21:48.090 "name": "Nvme$subsystem", 00:21:48.090 "trtype": "$TEST_TRANSPORT", 00:21:48.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.090 "adrfam": "ipv4", 00:21:48.090 "trsvcid": "$NVMF_PORT", 00:21:48.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.090 "hdgst": ${hdgst:-false}, 00:21:48.090 "ddgst": ${ddgst:-false} 00:21:48.090 }, 00:21:48.090 "method": "bdev_nvme_attach_controller" 00:21:48.090 } 00:21:48.090 EOF 00:21:48.090 )") 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.090 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.091 
11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.091 { 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme$subsystem", 00:21:48.091 "trtype": "$TEST_TRANSPORT", 00:21:48.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "$NVMF_PORT", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.091 "hdgst": ${hdgst:-false}, 00:21:48.091 "ddgst": ${ddgst:-false} 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 } 00:21:48.091 EOF 00:21:48.091 )") 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.091 { 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme$subsystem", 00:21:48.091 "trtype": "$TEST_TRANSPORT", 00:21:48.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "$NVMF_PORT", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.091 "hdgst": ${hdgst:-false}, 00:21:48.091 "ddgst": ${ddgst:-false} 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 } 00:21:48.091 EOF 00:21:48.091 )") 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.091 [2024-07-26 11:29:43.578302] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:21:48.091 [2024-07-26 11:29:43.578350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573660 ] 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.091 { 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme$subsystem", 00:21:48.091 "trtype": "$TEST_TRANSPORT", 00:21:48.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "$NVMF_PORT", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.091 "hdgst": ${hdgst:-false}, 00:21:48.091 "ddgst": ${ddgst:-false} 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 } 00:21:48.091 EOF 00:21:48.091 )") 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.091 { 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme$subsystem", 00:21:48.091 "trtype": "$TEST_TRANSPORT", 00:21:48.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "$NVMF_PORT", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.091 "hdgst": ${hdgst:-false}, 00:21:48.091 "ddgst": ${ddgst:-false} 00:21:48.091 }, 00:21:48.091 "method": 
"bdev_nvme_attach_controller" 00:21:48.091 } 00:21:48.091 EOF 00:21:48.091 )") 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.091 { 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme$subsystem", 00:21:48.091 "trtype": "$TEST_TRANSPORT", 00:21:48.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "$NVMF_PORT", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.091 "hdgst": ${hdgst:-false}, 00:21:48.091 "ddgst": ${ddgst:-false} 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 } 00:21:48.091 EOF 00:21:48.091 )") 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:48.091 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:48.091 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme1", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme2", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme3", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme4", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": 
"bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme5", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme6", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme7", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme8", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme9", 00:21:48.091 "trtype": "tcp", 00:21:48.091 "traddr": "10.0.0.2", 00:21:48.091 "adrfam": "ipv4", 00:21:48.091 "trsvcid": "4420", 00:21:48.091 "subnqn": 
"nqn.2016-06.io.spdk:cnode9", 00:21:48.091 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:48.091 "hdgst": false, 00:21:48.091 "ddgst": false 00:21:48.091 }, 00:21:48.091 "method": "bdev_nvme_attach_controller" 00:21:48.091 },{ 00:21:48.091 "params": { 00:21:48.091 "name": "Nvme10", 00:21:48.091 "trtype": "tcp", 00:21:48.092 "traddr": "10.0.0.2", 00:21:48.092 "adrfam": "ipv4", 00:21:48.092 "trsvcid": "4420", 00:21:48.092 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:48.092 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:48.092 "hdgst": false, 00:21:48.092 "ddgst": false 00:21:48.092 }, 00:21:48.092 "method": "bdev_nvme_attach_controller" 00:21:48.092 }' 00:21:48.092 [2024-07-26 11:29:43.648804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.092 [2024-07-26 11:29:43.721912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.986 Running I/O for 1 seconds... 00:21:50.918 00:21:50.918 Latency(us) 00:21:50.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.918 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme1n1 : 1.09 299.86 18.74 0.00 0.00 209932.79 9861.61 214708.42 00:21:50.918 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme2n1 : 1.12 285.04 17.82 0.00 0.00 219514.98 15416.56 212711.13 00:21:50.918 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme3n1 : 1.13 282.96 17.69 0.00 0.00 218174.22 17850.76 212711.13 00:21:50.918 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme4n1 : 1.12 284.57 17.79 0.00 0.00 213852.75 12982.37 206719.27 00:21:50.918 Job: Nvme5n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme5n1 : 1.13 282.31 17.64 0.00 0.00 212555.97 15915.89 210713.84 00:21:50.918 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme6n1 : 1.14 280.67 17.54 0.00 0.00 210759.09 17601.10 212711.13 00:21:50.918 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme7n1 : 1.11 291.55 18.22 0.00 0.00 198673.26 1521.37 205720.62 00:21:50.918 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme8n1 : 1.14 281.38 17.59 0.00 0.00 204076.57 14854.83 214708.42 00:21:50.918 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme9n1 : 1.14 285.40 17.84 0.00 0.00 198246.30 6928.09 220700.28 00:21:50.918 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.918 Verification LBA range: start 0x0 length 0x400 00:21:50.918 Nvme10n1 : 1.15 279.23 17.45 0.00 0.00 199727.40 16227.96 228689.43 00:21:50.918 =================================================================================================================== 00:21:50.918 Total : 2852.97 178.31 0.00 0.00 208519.34 1521.37 228689.43 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:50.918 
11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.918 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.176 rmmod nvme_tcp 00:21:51.176 rmmod nvme_fabrics 00:21:51.176 rmmod nvme_keyring 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1572895 ']' 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1572895 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1572895 ']' 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # kill -0 1572895 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1572895 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1572895' 00:21:51.176 killing process with pid 1572895 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1572895 00:21:51.176 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1572895 00:21:51.434 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.434 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.434 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.434 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.434 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.434 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.435 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.435 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.046 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.046 00:21:54.046 real 0m15.359s 00:21:54.046 user 0m34.470s 00:21:54.046 sys 0m5.755s 00:21:54.046 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 ************************************ 00:21:54.047 END TEST nvmf_shutdown_tc1 00:21:54.047 ************************************ 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 ************************************ 00:21:54.047 START TEST nvmf_shutdown_tc2 00:21:54.047 ************************************ 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:54.047 11:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.047 11:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:54.047 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:54.047 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:54.047 Found net devices under 0000:86:00.0: cvl_0_0 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.047 
11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.047 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:54.048 Found net devices under 0000:86:00.1: cvl_0_1 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.048 11:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:21:54.048 00:21:54.048 --- 10.0.0.2 ping statistics --- 00:21:54.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.048 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:21:54.048 00:21:54.048 --- 10.0.0.1 ping statistics --- 00:21:54.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.048 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1574689 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1574689 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1574689 ']' 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.048 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.048 [2024-07-26 11:29:49.565104] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:21:54.048 [2024-07-26 11:29:49.565144] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.048 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.048 [2024-07-26 11:29:49.636448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.306 [2024-07-26 11:29:49.714990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.306 [2024-07-26 11:29:49.715023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.306 [2024-07-26 11:29:49.715029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.307 [2024-07-26 11:29:49.715035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.307 [2024-07-26 11:29:49.715040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
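Editor's note: the nvmf_tcp_init sequence traced earlier (nvmf/common.sh@229-268) boils down to moving the target port into its own network namespace so target and initiator get independent TCP stacks on one host. A dry-run sketch with the interface names from this log — `run` only echoes, since the real commands need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps traced above.
# `run` only prints each command; swap `echo "+ $*"` for
# `sudo "$@"` to execute for real (requires root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                           # target port
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # reachability check
```

The two pings in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) confirm this path works before nvmf_tgt is launched in the namespace.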
00:21:54.307 [2024-07-26 11:29:49.715149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.307 [2024-07-26 11:29:49.715258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.307 [2024-07-26 11:29:49.715385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.307 [2024-07-26 11:29:49.715386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:54.872 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.872 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:54.872 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.872 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.872 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.872 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.873 [2024-07-26 11:29:50.412977] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.873 11:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.873 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.873 Malloc1 00:21:54.873 [2024-07-26 11:29:50.508431] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.873 Malloc2 00:21:55.131 Malloc3 00:21:55.131 Malloc4 00:21:55.131 Malloc5 00:21:55.131 Malloc6 00:21:55.131 Malloc7 00:21:55.131 Malloc8 00:21:55.390 Malloc9 
00:21:55.390 Malloc10 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1574976 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1574976 /var/tmp/bdevperf.sock 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1574976 ']' 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:55.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.390 { 00:21:55.390 "params": { 00:21:55.390 "name": "Nvme$subsystem", 00:21:55.390 "trtype": "$TEST_TRANSPORT", 00:21:55.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.390 "adrfam": "ipv4", 00:21:55.390 "trsvcid": "$NVMF_PORT", 00:21:55.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.390 "hdgst": ${hdgst:-false}, 00:21:55.390 "ddgst": ${ddgst:-false} 00:21:55.390 }, 00:21:55.390 "method": "bdev_nvme_attach_controller" 00:21:55.390 } 00:21:55.390 EOF 00:21:55.390 )") 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.390 { 00:21:55.390 "params": { 00:21:55.390 "name": "Nvme$subsystem", 00:21:55.390 "trtype": "$TEST_TRANSPORT", 00:21:55.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.390 
"adrfam": "ipv4", 00:21:55.390 "trsvcid": "$NVMF_PORT", 00:21:55.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.390 "hdgst": ${hdgst:-false}, 00:21:55.390 "ddgst": ${ddgst:-false} 00:21:55.390 }, 00:21:55.390 "method": "bdev_nvme_attach_controller" 00:21:55.390 } 00:21:55.390 EOF 00:21:55.390 )") 00:21:55.390 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": ${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": "bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": ${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": "bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": ${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": "bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": 
${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": "bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": ${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": "bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 [2024-07-26 11:29:50.978262] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
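Editor's note: waitforlisten (autotest_common.sh@831-840, used here for both /var/tmp/spdk.sock and /var/tmp/bdevperf.sock) is essentially a bounded poll for the app's UNIX-domain RPC socket. A simplified sketch, assuming a 0.1 s poll interval — the real helper also checks the pid is alive and probes the socket with an RPC, which this sketch omits:

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: poll until the RPC socket appears or the
# retry budget (default 100, as in autotest_common.sh) runs out.
# Existence check only; the real helper does more.
wait_for_sock() {
  local sock=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$sock" ] && return 0
    sleep 0.1
  done
  return 1
}
wait_for_sock /tmp/does_not_exist.sock 3 || echo "gave up after 3 tries"
```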
00:21:55.391 [2024-07-26 11:29:50.978310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574976 ] 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": ${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": "bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": ${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": 
"bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.391 { 00:21:55.391 "params": { 00:21:55.391 "name": "Nvme$subsystem", 00:21:55.391 "trtype": "$TEST_TRANSPORT", 00:21:55.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.391 "adrfam": "ipv4", 00:21:55.391 "trsvcid": "$NVMF_PORT", 00:21:55.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.391 "hdgst": ${hdgst:-false}, 00:21:55.391 "ddgst": ${ddgst:-false} 00:21:55.391 }, 00:21:55.391 "method": "bdev_nvme_attach_controller" 00:21:55.391 } 00:21:55.391 EOF 00:21:55.391 )") 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:55.391 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:55.391 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.392 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:55.392 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme1", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme2", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme3", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme4", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": 
"bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme5", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme6", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme7", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme8", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme9", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": 
"nqn.2016-06.io.spdk:cnode9", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 },{ 00:21:55.392 "params": { 00:21:55.392 "name": "Nvme10", 00:21:55.392 "trtype": "tcp", 00:21:55.392 "traddr": "10.0.0.2", 00:21:55.392 "adrfam": "ipv4", 00:21:55.392 "trsvcid": "4420", 00:21:55.392 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:55.392 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:55.392 "hdgst": false, 00:21:55.392 "ddgst": false 00:21:55.392 }, 00:21:55.392 "method": "bdev_nvme_attach_controller" 00:21:55.392 }' 00:21:55.392 [2024-07-26 11:29:51.045539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.650 [2024-07-26 11:29:51.117676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.022 Running I/O for 10 seconds... 00:21:57.022 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.022 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:57.022 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:57.022 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.022 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.280 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.280 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' 
-z /var/tmp/bdevperf.sock ']' 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:57.281 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.539 11:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:57.539 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1574976 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1574976 ']' 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1574976 00:21:57.797 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:57.798 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.798 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1574976 00:21:57.798 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.798 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.798 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1574976' 00:21:57.798 killing process with pid 1574976 00:21:57.798 11:29:53 
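The trace above shows target/shutdown.sh polling bdevperf until enough reads complete (read_io_count=3, then 67, then 131, at which point `-ge 100` succeeds and the loop breaks). A minimal sketch of that polling loop, not the actual SPDK script: `get_read_io_count` here is a hypothetical stand-in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`, simulated so the sketch is self-contained.

```shell
# Stand-in for the rpc_cmd | jq pipeline in the trace: pretend ~64 reads
# accumulate per polling attempt (argument = attempt number).
get_read_io_count() {
  echo $(( $1 * 64 ))
}

# Retry up to 10 times, 0.25 s apart, succeeding once at least 100 reads
# have been observed -- the same shape as shutdown.sh lines 57-69 above.
wait_for_io() {
  local ret=1 i count
  for (( i = 10; i != 0; i-- )); do
    count=$(get_read_io_count $(( 11 - i )))
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

wait_for_io && echo "enough I/O observed"
```

With the simulated counter this succeeds on the second attempt; in the real test the loop instead fails the testcase if the counter never reaches 100 within 10 polls.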
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1574976 00:21:57.798 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1574976 00:21:58.054 Received shutdown signal, test time was about 0.896433 seconds 00:21:58.054 00:21:58.054 Latency(us) 00:21:58.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.054 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme1n1 : 0.89 287.47 17.97 0.00 0.00 220292.88 16477.62 210713.84 00:21:58.054 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme2n1 : 0.88 292.16 18.26 0.00 0.00 212971.28 17101.78 212711.13 00:21:58.054 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme3n1 : 0.86 297.71 18.61 0.00 0.00 205019.55 12732.71 213709.78 00:21:58.054 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme4n1 : 0.89 289.07 18.07 0.00 0.00 207401.08 14105.84 209715.20 00:21:58.054 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme5n1 : 0.88 289.90 18.12 0.00 0.00 203177.20 30957.96 196732.83 00:21:58.054 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme6n1 : 0.89 286.81 17.93 0.00 0.00 201563.92 17476.27 211712.49 00:21:58.054 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme7n1 : 0.88 291.03 18.19 0.00 0.00 
194640.09 14043.43 213709.78 00:21:58.054 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme8n1 : 0.87 298.11 18.63 0.00 0.00 185474.86 3620.08 205720.62 00:21:58.054 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme9n1 : 0.90 285.78 17.86 0.00 0.00 190969.66 17226.61 208716.56 00:21:58.054 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.054 Verification LBA range: start 0x0 length 0x400 00:21:58.054 Nvme10n1 : 0.86 222.34 13.90 0.00 0.00 238142.09 18974.23 225693.50 00:21:58.054 =================================================================================================================== 00:21:58.054 Total : 2840.38 177.52 0.00 0.00 205108.75 3620.08 225693.50 00:21:58.054 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1574689 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:59.427 11:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:59.427 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.428 rmmod nvme_tcp 00:21:59.428 rmmod nvme_fabrics 00:21:59.428 rmmod nvme_keyring 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1574689 ']' 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1574689 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1574689 ']' 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1574689 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.428 11:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1574689 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1574689' 00:21:59.428 killing process with pid 1574689 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1574689 00:21:59.428 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1574689 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.687 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.593 11:29:57 
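Both shutdown paths above go through the same killprocess helper (`kill -0` liveness check, `uname` branch, `ps --no-headers -o comm=` to refuse killing a sudo wrapper). A simplified sketch of that guard, an assumption reconstructed from the common/autotest_common.sh trace rather than the exact function:

```shell
# Simplified killprocess: bail out on an empty PID, treat an already-dead
# PID as success, and never signal a sudo wrapper process.
killprocess() {
  local pid=$1
  if [ -z "$pid" ]; then
    return 1                          # matches the '[' -z "$pid" ']' guard
  fi
  if ! kill -0 "$pid" 2>/dev/null; then
    return 0                          # process already gone; nothing to do
  fi
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = sudo ]; then
      return 1                        # killing sudo would orphan the child
    fi
  fi
  echo "killing process with pid $pid"
  kill "$pid"
}
```

In the trace the comm check resolves to `reactor_0`/`reactor_1` (the SPDK reactor threads), so the guard passes and the PID is killed, followed by a `wait` on it.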
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.593 00:22:01.593 real 0m8.038s 00:22:01.593 user 0m24.369s 00:22:01.593 sys 0m1.350s 00:22:01.593 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.593 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.593 ************************************ 00:22:01.593 END TEST nvmf_shutdown_tc2 00:22:01.593 ************************************ 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:01.853 ************************************ 00:22:01.853 START TEST nvmf_shutdown_tc3 00:22:01.853 ************************************ 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 
00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:01.853 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:01.853 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:01.853 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:01.854 Found net devices under 0000:86:00.0: cvl_0_0 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:01.854 Found net devices under 0000:86:00.1: cvl_0_1 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.854 11:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:01.854 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:22:02.114 00:22:02.114 --- 10.0.0.2 ping statistics --- 00:22:02.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.114 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:02.114 00:22:02.114 --- 10.0.0.1 ping statistics --- 00:22:02.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.114 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
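The nvmf_tcp_init sequence above moves the target NIC (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace, leaves the initiator NIC (cvl_0_1, 10.0.0.1) in the root namespace, and verifies reachability in both directions with ping. A sketch of the same wiring using a veth pair instead of the physical Intel E810 ports — an assumption for environments without the hardware; requires root:

```shell
# Recreate the test topology with virtual interfaces: target side in its
# own netns at 10.0.0.2, initiator in the root namespace at 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link add cvl_0_1 type veth peer name cvl_0_0
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Same reachability checks as the trace: root ns -> target, target ns -> initiator.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

This is why nvmf_tgt is later launched under `ip netns exec cvl_0_0_ns_spdk`: the target process must live in the namespace that owns 10.0.0.2.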
nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1576237 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1576237 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1576237 ']' 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.114 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.114 [2024-07-26 11:29:57.695423] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:22:02.114 [2024-07-26 11:29:57.695468] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.114 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.114 [2024-07-26 11:29:57.768002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.372 [2024-07-26 11:29:57.845812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.372 [2024-07-26 11:29:57.845847] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.372 [2024-07-26 11:29:57.845857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.372 [2024-07-26 11:29:57.845862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.372 [2024-07-26 11:29:57.845867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:02.372 [2024-07-26 11:29:57.845977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.372 [2024-07-26 11:29:57.846096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.372 [2024-07-26 11:29:57.846204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.372 [2024-07-26 11:29:57.846204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.937 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.938 [2024-07-26 11:29:58.531748] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.938 11:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.938 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.195 Malloc1 00:22:03.195 [2024-07-26 11:29:58.627120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.195 Malloc2 00:22:03.195 Malloc3 00:22:03.195 Malloc4 00:22:03.195 Malloc5 00:22:03.195 Malloc6 00:22:03.452 Malloc7 00:22:03.452 Malloc8 00:22:03.452 Malloc9 
00:22:03.452 Malloc10 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1576516 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1576516 /var/tmp/bdevperf.sock 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1576516 ']' 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:03.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.452 { 00:22:03.452 "params": { 00:22:03.452 "name": "Nvme$subsystem", 00:22:03.452 "trtype": "$TEST_TRANSPORT", 00:22:03.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.452 "adrfam": "ipv4", 00:22:03.452 "trsvcid": "$NVMF_PORT", 00:22:03.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.452 "hdgst": ${hdgst:-false}, 00:22:03.452 "ddgst": ${ddgst:-false} 00:22:03.452 }, 00:22:03.452 "method": "bdev_nvme_attach_controller" 00:22:03.452 } 00:22:03.452 EOF 00:22:03.452 )") 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.452 { 00:22:03.452 "params": { 00:22:03.452 "name": "Nvme$subsystem", 00:22:03.452 "trtype": "$TEST_TRANSPORT", 00:22:03.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.452 
"adrfam": "ipv4", 00:22:03.452 "trsvcid": "$NVMF_PORT", 00:22:03.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.452 "hdgst": ${hdgst:-false}, 00:22:03.452 "ddgst": ${ddgst:-false} 00:22:03.452 }, 00:22:03.452 "method": "bdev_nvme_attach_controller" 00:22:03.452 } 00:22:03.452 EOF 00:22:03.452 )") 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.452 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.453 { 00:22:03.453 "params": { 00:22:03.453 "name": "Nvme$subsystem", 00:22:03.453 "trtype": "$TEST_TRANSPORT", 00:22:03.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.453 "adrfam": "ipv4", 00:22:03.453 "trsvcid": "$NVMF_PORT", 00:22:03.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.453 "hdgst": ${hdgst:-false}, 00:22:03.453 "ddgst": ${ddgst:-false} 00:22:03.453 }, 00:22:03.453 "method": "bdev_nvme_attach_controller" 00:22:03.453 } 00:22:03.453 EOF 00:22:03.453 )") 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.453 { 00:22:03.453 "params": { 00:22:03.453 "name": "Nvme$subsystem", 00:22:03.453 "trtype": "$TEST_TRANSPORT", 00:22:03.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.453 "adrfam": "ipv4", 00:22:03.453 "trsvcid": "$NVMF_PORT", 00:22:03.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:03.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.453 "hdgst": ${hdgst:-false}, 00:22:03.453 "ddgst": ${ddgst:-false} 00:22:03.453 }, 00:22:03.453 "method": "bdev_nvme_attach_controller" 00:22:03.453 } 00:22:03.453 EOF 00:22:03.453 )") 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.453 { 00:22:03.453 "params": { 00:22:03.453 "name": "Nvme$subsystem", 00:22:03.453 "trtype": "$TEST_TRANSPORT", 00:22:03.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.453 "adrfam": "ipv4", 00:22:03.453 "trsvcid": "$NVMF_PORT", 00:22:03.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.453 "hdgst": ${hdgst:-false}, 00:22:03.453 "ddgst": ${ddgst:-false} 00:22:03.453 }, 00:22:03.453 "method": "bdev_nvme_attach_controller" 00:22:03.453 } 00:22:03.453 EOF 00:22:03.453 )") 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.453 { 00:22:03.453 "params": { 00:22:03.453 "name": "Nvme$subsystem", 00:22:03.453 "trtype": "$TEST_TRANSPORT", 00:22:03.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.453 "adrfam": "ipv4", 00:22:03.453 "trsvcid": "$NVMF_PORT", 00:22:03.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.453 "hdgst": ${hdgst:-false}, 00:22:03.453 "ddgst": 
${ddgst:-false} 00:22:03.453 }, 00:22:03.453 "method": "bdev_nvme_attach_controller" 00:22:03.453 } 00:22:03.453 EOF 00:22:03.453 )") 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.453 { 00:22:03.453 "params": { 00:22:03.453 "name": "Nvme$subsystem", 00:22:03.453 "trtype": "$TEST_TRANSPORT", 00:22:03.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.453 "adrfam": "ipv4", 00:22:03.453 "trsvcid": "$NVMF_PORT", 00:22:03.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.453 "hdgst": ${hdgst:-false}, 00:22:03.453 "ddgst": ${ddgst:-false} 00:22:03.453 }, 00:22:03.453 "method": "bdev_nvme_attach_controller" 00:22:03.453 } 00:22:03.453 EOF 00:22:03.453 )") 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.453 [2024-07-26 11:29:59.099900] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:22:03.453 [2024-07-26 11:29:59.099947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576516 ] 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.453 { 00:22:03.453 "params": { 00:22:03.453 "name": "Nvme$subsystem", 00:22:03.453 "trtype": "$TEST_TRANSPORT", 00:22:03.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.453 "adrfam": "ipv4", 00:22:03.453 "trsvcid": "$NVMF_PORT", 00:22:03.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.453 "hdgst": ${hdgst:-false}, 00:22:03.453 "ddgst": ${ddgst:-false} 00:22:03.453 }, 00:22:03.453 "method": "bdev_nvme_attach_controller" 00:22:03.453 } 00:22:03.453 EOF 00:22:03.453 )") 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.453 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.453 { 00:22:03.453 "params": { 00:22:03.453 "name": "Nvme$subsystem", 00:22:03.453 "trtype": "$TEST_TRANSPORT", 00:22:03.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.453 "adrfam": "ipv4", 00:22:03.453 "trsvcid": "$NVMF_PORT", 00:22:03.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.453 "hdgst": ${hdgst:-false}, 00:22:03.453 "ddgst": ${ddgst:-false} 00:22:03.453 }, 00:22:03.453 "method": 
"bdev_nvme_attach_controller" 00:22:03.453 } 00:22:03.453 EOF 00:22:03.453 )") 00:22:03.712 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.712 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.712 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.712 { 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme$subsystem", 00:22:03.712 "trtype": "$TEST_TRANSPORT", 00:22:03.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "$NVMF_PORT", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.712 "hdgst": ${hdgst:-false}, 00:22:03.712 "ddgst": ${ddgst:-false} 00:22:03.712 }, 00:22:03.712 "method": "bdev_nvme_attach_controller" 00:22:03.712 } 00:22:03.712 EOF 00:22:03.712 )") 00:22:03.712 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:03.712 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:22:03.712 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.712 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:03.712 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme1", 00:22:03.712 "trtype": "tcp", 00:22:03.712 "traddr": "10.0.0.2", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "4420", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.712 "hdgst": false, 00:22:03.712 "ddgst": false 00:22:03.712 }, 00:22:03.712 "method": "bdev_nvme_attach_controller" 00:22:03.712 },{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme2", 00:22:03.712 "trtype": "tcp", 00:22:03.712 "traddr": "10.0.0.2", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "4420", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:03.712 "hdgst": false, 00:22:03.712 "ddgst": false 00:22:03.712 }, 00:22:03.712 "method": "bdev_nvme_attach_controller" 00:22:03.712 },{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme3", 00:22:03.712 "trtype": "tcp", 00:22:03.712 "traddr": "10.0.0.2", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "4420", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:03.712 "hdgst": false, 00:22:03.712 "ddgst": false 00:22:03.712 }, 00:22:03.712 "method": "bdev_nvme_attach_controller" 00:22:03.712 },{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme4", 00:22:03.712 "trtype": "tcp", 00:22:03.712 "traddr": "10.0.0.2", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "4420", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:03.712 "hdgst": false, 00:22:03.712 "ddgst": false 00:22:03.712 }, 00:22:03.712 "method": 
"bdev_nvme_attach_controller" 00:22:03.712 },{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme5", 00:22:03.712 "trtype": "tcp", 00:22:03.712 "traddr": "10.0.0.2", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "4420", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:03.712 "hdgst": false, 00:22:03.712 "ddgst": false 00:22:03.712 }, 00:22:03.712 "method": "bdev_nvme_attach_controller" 00:22:03.712 },{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme6", 00:22:03.712 "trtype": "tcp", 00:22:03.712 "traddr": "10.0.0.2", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "4420", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:03.712 "hdgst": false, 00:22:03.712 "ddgst": false 00:22:03.712 }, 00:22:03.712 "method": "bdev_nvme_attach_controller" 00:22:03.712 },{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme7", 00:22:03.712 "trtype": "tcp", 00:22:03.712 "traddr": "10.0.0.2", 00:22:03.712 "adrfam": "ipv4", 00:22:03.712 "trsvcid": "4420", 00:22:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:03.712 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:03.712 "hdgst": false, 00:22:03.712 "ddgst": false 00:22:03.712 }, 00:22:03.712 "method": "bdev_nvme_attach_controller" 00:22:03.712 },{ 00:22:03.712 "params": { 00:22:03.712 "name": "Nvme8", 00:22:03.713 "trtype": "tcp", 00:22:03.713 "traddr": "10.0.0.2", 00:22:03.713 "adrfam": "ipv4", 00:22:03.713 "trsvcid": "4420", 00:22:03.713 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:03.713 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:03.713 "hdgst": false, 00:22:03.713 "ddgst": false 00:22:03.713 }, 00:22:03.713 "method": "bdev_nvme_attach_controller" 00:22:03.713 },{ 00:22:03.713 "params": { 00:22:03.713 "name": "Nvme9", 00:22:03.713 "trtype": "tcp", 00:22:03.713 "traddr": "10.0.0.2", 00:22:03.713 "adrfam": "ipv4", 00:22:03.713 "trsvcid": "4420", 00:22:03.713 "subnqn": 
"nqn.2016-06.io.spdk:cnode9", 00:22:03.713 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:03.713 "hdgst": false, 00:22:03.713 "ddgst": false 00:22:03.713 }, 00:22:03.713 "method": "bdev_nvme_attach_controller" 00:22:03.713 },{ 00:22:03.713 "params": { 00:22:03.713 "name": "Nvme10", 00:22:03.713 "trtype": "tcp", 00:22:03.713 "traddr": "10.0.0.2", 00:22:03.713 "adrfam": "ipv4", 00:22:03.713 "trsvcid": "4420", 00:22:03.713 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:03.713 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:03.713 "hdgst": false, 00:22:03.713 "ddgst": false 00:22:03.713 }, 00:22:03.713 "method": "bdev_nvme_attach_controller" 00:22:03.713 }' 00:22:03.713 [2024-07-26 11:29:59.165897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.713 [2024-07-26 11:29:59.238372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.611 Running I/O for 10 seconds... 00:22:05.611 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.611 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:05.611 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:05.611 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.611 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.611 11:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:05.611 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:05.869 
11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:05.869 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1576237 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1576237 ']' 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1576237 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1576237 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1576237' 00:22:06.137 killing process with pid 1576237 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1576237 00:22:06.137 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1576237 00:22:06.137 [2024-07-26 11:30:01.724757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d180 is same with the state(5) to be set 00:22:06.137 [2024-07-26 11:30:01.724871] 
00:22:06.138 [2024-07-26 11:30:01.726242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85f320 is same with the state(5) to be set
00:22:06.138 [2024-07-26 11:30:01.727146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d640 is same with the state(5) to be set
00:22:06.139 [2024-07-26 11:30:01.728821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85db00 is same with the state(5) to be set
00:22:06.140 [2024-07-26 11:30:01.730143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set
is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 
00:22:06.140 [2024-07-26 11:30:01.730420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.140 [2024-07-26 11:30:01.730494] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.730542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85dfe0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 
is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731243] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 
00:22:06.141 [2024-07-26 11:30:01.731291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731363] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.731489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4a0 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.732238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.732249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 
is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.732255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.732261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.732267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.141 [2024-07-26 11:30:01.732273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732320] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 
00:22:06.142 [2024-07-26 11:30:01.732326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732397] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 
is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.732610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e980 is same with the state(5) to be set 
00:22:06.142 [2024-07-26 11:30:01.733515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.733528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.733535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.733541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.733547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.142 [2024-07-26 11:30:01.733553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733598] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [2024-07-26 11:30:01.733682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4a230 is same with the state(5) to be set 00:22:06.143 [… identical tcp.c:1653 recv-state errors for tqpair=0xa4a230 repeated through 2024-07-26 11:30:01.733926 …] 00:22:06.143 [2024-07-26 11:30:01.755364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.143 [2024-07-26 11:30:01.755418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.143 [2024-07-26 11:30:01.755429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.143 [2024-07-26 11:30:01.755437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.143 [2024-07-26 11:30:01.755445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.143 [2024-07-26 11:30:01.755452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.143 [2024-07-26 11:30:01.755459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.143 [2024-07-26 11:30:01.755466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.143 [2024-07-26 11:30:01.755474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad9bc0 is same with the state(5) to be
set 00:22:06.143 [… identical ASYNC EVENT REQUEST (0c) qid:0 cid:0–3 / ABORTED - SQ DELETION (00/08) sequences repeated for tqpair=0x1ab0840, 0x1aafab0, 0x1ad2f60, 0x1abe700, 0x1ae2910, 0x1946b90, 0x1465340, 0x1942f30 …] 00:22:06.144 [2024-07-26 11:30:01.756233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1916c70 is same with the state(5) to be set 00:22:06.144 [2024-07-26 11:30:01.756980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.144 [2024-07-26 11:30:01.757004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.144 [2024-07-26 11:30:01.757019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.144 [2024-07-26 11:30:01.757027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.144 [2024-07-26 11:30:01.757036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.144 [2024-07-26 11:30:01.757044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.145 [2024-07-26 11:30:01.757053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.145 [2024-07-26 11:30:01.757060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.145 [2024-07-26 11:30:01.757069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.145 [2024-07-26 11:30:01.757076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.145 [2024-07-26 11:30:01.757086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.145 [2024-07-26 11:30:01.757093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.145 [… identical WRITE sqid:1 / ABORTED - SQ DELETION (00/08) pairs repeated for cid:6–44, lba:25344–30208, len:128 each …] 00:22:06.146 [2024-07-26 11:30:01.757701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 
11:30:01.757796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.146 [2024-07-26 11:30:01.757988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.146 [2024-07-26 11:30:01.757995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8910 is same with the state(5) to be set 00:22:06.147 [2024-07-26 11:30:01.758061] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f8910 was disconnected and freed. reset controller. 
00:22:06.147 [2024-07-26 11:30:01.758091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758182] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.147 [2024-07-26 11:30:01.758364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.147 [2024-07-26 11:30:01.758372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 
[2024-07-26 11:30:01.758442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.148 [2024-07-26 11:30:01.758780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.148 [2024-07-26 11:30:01.758788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 
[2024-07-26 11:30:01.758796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.758986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.758994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759459] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f9df0 was disconnected and freed. reset controller. 00:22:06.149 [2024-07-26 11:30:01.759845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.149 [2024-07-26 11:30:01.759970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.149 [2024-07-26 11:30:01.759978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.759985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.759993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760257] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.760340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.760346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.767522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.767536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.767548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.767558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.150 [2024-07-26 11:30:01.767569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.150 [2024-07-26 11:30:01.767577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 
11:30:01.767636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.151 [2024-07-26 11:30:01.767975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.767984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.767995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.768004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.768014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.768035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.768043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.768055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.768063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.768074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.768082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.768093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.768102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.151 [2024-07-26 11:30:01.768113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.151 [2024-07-26 11:30:01.768122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.768132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.768141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.768152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.768160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.768236] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23ebab0 was disconnected and freed. reset controller. 
00:22:06.152 [2024-07-26 11:30:01.770649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad9bc0 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab0840 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aafab0 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad2f60 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abe700 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae2910 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1946b90 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1465340 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1942f30 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.770824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1916c70 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.772460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:06.152 [2024-07-26 11:30:01.773447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:06.152 [2024-07-26 11:30:01.773475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.152 [2024-07-26 11:30:01.773670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.152 [2024-07-26 11:30:01.773690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad2f60 with addr=10.0.0.2, port=4420 00:22:06.152 [2024-07-26 11:30:01.773701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2f60 is same with the state(5) to be set 00:22:06.152 [2024-07-26 11:30:01.773755] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.152 [2024-07-26 11:30:01.773806] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.152 [2024-07-26 11:30:01.773857] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.152 [2024-07-26 11:30:01.773907] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.152 [2024-07-26 11:30:01.773958] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.152 [2024-07-26 11:30:01.774008] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.152 [2024-07-26 11:30:01.774058] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.152 [2024-07-26 11:30:01.774538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.152 [2024-07-26 11:30:01.774556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab0840 with addr=10.0.0.2, port=4420 00:22:06.152 [2024-07-26 11:30:01.774566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab0840 is same with the state(5) to be set 00:22:06.152 [2024-07-26 11:30:01.774636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.152 [2024-07-26 11:30:01.774650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1aafab0 with addr=10.0.0.2, port=4420 00:22:06.152 [2024-07-26 11:30:01.774660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aafab0 is same with the state(5) to be set 00:22:06.152 [2024-07-26 11:30:01.774673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad2f60 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.774792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab0840 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.774806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aafab0 (9): Bad file descriptor 00:22:06.152 [2024-07-26 11:30:01.774817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:06.152 [2024-07-26 11:30:01.774825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:06.152 [2024-07-26 11:30:01.774835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:06.152 [2024-07-26 11:30:01.774909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.152 [2024-07-26 11:30:01.774920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:06.152 [2024-07-26 11:30:01.774927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:06.152 [2024-07-26 11:30:01.774936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:22:06.152 [2024-07-26 11:30:01.774950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.152 [2024-07-26 11:30:01.774958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.152 [2024-07-26 11:30:01.774967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.152 [2024-07-26 11:30:01.775011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.152 [2024-07-26 11:30:01.775019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.152 [2024-07-26 11:30:01.780740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.780757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.780772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.780780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.780789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.780797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.780806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.780813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.780822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-07-26 11:30:01.780838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-07-26 11:30:01.780846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.780991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.780999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:06.153 [2024-07-26 11:30:01.781007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.153 [2024-07-26 11:30:01.781329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.153 [2024-07-26 11:30:01.781337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 
11:30:01.781378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781468] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 
[2024-07-26 11:30:01.781662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-07-26 11:30:01.781688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.154 [2024-07-26 11:30:01.781696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.781715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.781731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.781748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.781764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.781779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.781795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.781812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.781821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9610 is same with the state(5) to be set 00:22:06.155 [2024-07-26 11:30:01.782900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.782913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.782925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.782933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.782942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.782950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.782960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.782967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.782977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.782985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.782996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.155 [2024-07-26 11:30:01.783029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-07-26 11:30:01.783263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.155 [2024-07-26 11:30:01.783270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.156 [2024-07-26 11:30:01.783313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 
11:30:01.783690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783780] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-07-26 11:30:01.783820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-07-26 11:30:01.783830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.783954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 
[2024-07-26 11:30:01.783970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.783978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19da9a0 is same with the state(5) to be set 00:22:06.157 [2024-07-26 11:30:01.785043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.157 [2024-07-26 11:30:01.785321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785410] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-07-26 11:30:01.785506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-07-26 11:30:01.785515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 
11:30:01.785683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 
[2024-07-26 11:30:01.785947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.785986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.785993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.786001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.786009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.786017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.786025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.786034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.786041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.786050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.786056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.786065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.158 [2024-07-26 11:30:01.786072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-07-26 11:30:01.786080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7e580 is same with the state(5) to be set 00:22:06.422 [2024-07-26 11:30:01.787157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.422 [2024-07-26 11:30:01.787175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.422 [2024-07-26 11:30:01.787184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.422 [2024-07-26 11:30:01.787193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.423 [2024-07-26 11:30:01.787289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787367] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 
11:30:01.787615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.423 [2024-07-26 11:30:01.787759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.423 [2024-07-26 11:30:01.787765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 
[2024-07-26 11:30:01.787866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.787990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.787997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.788011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.788025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.788039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.788053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.788068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.788082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.788096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.788106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fac0 is same with the state(5) to be set 
00:22:06.424 [2024-07-26 11:30:01.789083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.424 [2024-07-26 11:30:01.789300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.424 [2024-07-26 11:30:01.789308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.425 [2024-07-26 11:30:01.789336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 
11:30:01.789663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.425 [2024-07-26 11:30:01.789867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.425 [2024-07-26 11:30:01.789875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.789881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.789889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.789896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.789904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 
[2024-07-26 11:30:01.789910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.789918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.789924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.789934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.789941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.789949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.789955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.789963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.789969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.789977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.795795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.795810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.795818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.795826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.795832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.795839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911200 is same with the state(5) to be set 00:22:06.426 [2024-07-26 11:30:01.796833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.796986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.796996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.426 [2024-07-26 11:30:01.797016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.426 [2024-07-26 11:30:01.797386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.426 [2024-07-26 11:30:01.797395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 
11:30:01.797453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 
[2024-07-26 11:30:01.797787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.427 [2024-07-26 11:30:01.797971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.427 [2024-07-26 11:30:01.797980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.797991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.798000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.798010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.798020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.798031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.798039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.798050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.798059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.798069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.798078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.798088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19126b0 is same with the state(5) to be set 00:22:06.428 [2024-07-26 11:30:01.799395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.428 [2024-07-26 11:30:01.799423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.428 [2024-07-26 11:30:01.799763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.799985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.799996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.800005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.428 [2024-07-26 11:30:01.800016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.428 [2024-07-26 11:30:01.800024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 
11:30:01.800198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800306] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 
[2024-07-26 11:30:01.800526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.429 [2024-07-26 11:30:01.800651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.429 [2024-07-26 11:30:01.800660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244040 is same with the state(5) to be set 00:22:06.429 [2024-07-26 11:30:01.802224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.429 [2024-07-26 11:30:01.802249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.429 [2024-07-26 11:30:01.802260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.429 [2024-07-26 11:30:01.802271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:06.429 [2024-07-26 11:30:01.802362] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.429 [2024-07-26 11:30:01.802378] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.429 [2024-07-26 11:30:01.802395] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:06.429 [2024-07-26 11:30:01.802483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:06.429 [2024-07-26 11:30:01.802499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:06.429 task offset: 24576 on job bdev=Nvme9n1 fails
00:22:06.429
00:22:06.429 Latency(us)
00:22:06.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.429 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.429 Job: Nvme1n1 ended in about 0.90 seconds with error
00:22:06.429 Verification LBA range: start 0x0 length 0x400
00:22:06.429 Nvme1n1 : 0.90 213.05 13.32 71.02 0.00 223146.79 15229.32 211712.49
00:22:06.429 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.429 Job: Nvme2n1 ended in about 0.90 seconds with error
00:22:06.429 Verification LBA range: start 0x0 length 0x400
00:22:06.429 Nvme2n1 : 0.90 212.55 13.28 70.85 0.00 219815.25 16477.62 202724.69
00:22:06.430 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme3n1 ended in about 0.91 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme3n1 : 0.91 212.06 13.25 70.69 0.00 216454.22 15104.49 208716.56
00:22:06.430 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme4n1 ended in about 0.91 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme4n1 : 0.91 211.59 13.22 70.53 0.00 213141.46 19972.88 212711.13
00:22:06.430 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme5n1 ended in about 0.92 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme5n1 : 0.92 209.81 13.11 69.94 0.00 211228.77 17725.93 212711.13
00:22:06.430 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme6n1 ended in about 0.92 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme6n1 : 0.92 209.27 13.08 69.76 0.00 207979.52 15728.64 205720.62
00:22:06.430 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme7n1 ended in about 0.92 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme7n1 : 0.92 208.69 13.04 69.56 0.00 204753.55 15166.90 208716.56
00:22:06.430 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme8n1 ended in about 0.89 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme8n1 : 0.89 215.57 13.47 71.86 0.00 193657.17 14417.92 214708.42
00:22:06.430 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme9n1 ended in about 0.89 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme9n1 : 0.89 216.30 13.52 72.10 0.00 189118.17 21970.16 216705.71
00:22:06.430 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.430 Job: Nvme10n1 ended in about 0.89 seconds with error
00:22:06.430 Verification LBA range: start 0x0 length 0x400
00:22:06.430 Nvme10n1 : 0.89 216.02 13.50 72.01 0.00 185665.95 14605.17 234681.30
00:22:06.430 ===================================================================================================================
00:22:06.430 Total : 2124.91 132.81 708.30 0.00 206496.09 14417.92 234681.30
00:22:06.430 [2024-07-26 11:30:01.824689] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:06.430 [2024-07-26 11:30:01.824724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:06.430 [2024-07-26 11:30:01.824988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.430 [2024-07-26 
11:30:01.825006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1916c70 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.825016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916c70 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.825156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.825167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae2910 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.825174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2910 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.825352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.825362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1942f30 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.825369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1942f30 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.825553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.825562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1946b90 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.825574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1946b90 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.827141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:06.430 [2024-07-26 11:30:01.827156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.430 [2024-07-26 11:30:01.827446] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.827460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1465340 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.827468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1465340 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.827684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.827695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abe700 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.827702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abe700 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.827900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.827910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad9bc0 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.827917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad9bc0 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.827929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1916c70 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.827941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae2910 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.827949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1942f30 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.827958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1946b90 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.827985] 
bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.430 [2024-07-26 11:30:01.828000] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.430 [2024-07-26 11:30:01.828010] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.430 [2024-07-26 11:30:01.828019] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.430 [2024-07-26 11:30:01.828030] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.430 [2024-07-26 11:30:01.828090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:06.430 [2024-07-26 11:30:01.828322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.828334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad2f60 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.828341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2f60 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.828580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.828591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aafab0 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.828598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aafab0 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.828606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1465340 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.828618] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abe700 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.828631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad9bc0 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.828639] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.430 [2024-07-26 11:30:01.828646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.430 [2024-07-26 11:30:01.828654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.430 [2024-07-26 11:30:01.828664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.430 [2024-07-26 11:30:01.828670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.430 [2024-07-26 11:30:01.828676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:06.430 [2024-07-26 11:30:01.828685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.430 [2024-07-26 11:30:01.828691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.430 [2024-07-26 11:30:01.828697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:22:06.430 [2024-07-26 11:30:01.828707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:06.430 [2024-07-26 11:30:01.828714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:06.430 [2024-07-26 11:30:01.828720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:06.430 [2024-07-26 11:30:01.828794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.430 [2024-07-26 11:30:01.828802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.430 [2024-07-26 11:30:01.828808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.430 [2024-07-26 11:30:01.828814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.430 [2024-07-26 11:30:01.828964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.430 [2024-07-26 11:30:01.828975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab0840 with addr=10.0.0.2, port=4420 00:22:06.430 [2024-07-26 11:30:01.828982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab0840 is same with the state(5) to be set 00:22:06.430 [2024-07-26 11:30:01.828990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad2f60 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.828998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aafab0 (9): Bad file descriptor 00:22:06.430 [2024-07-26 11:30:01.829006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.430 [2024-07-26 11:30:01.829013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.430 [2024-07-26 11:30:01.829019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:06.431 [2024-07-26 11:30:01.829027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:06.431 [2024-07-26 11:30:01.829033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:06.431 [2024-07-26 11:30:01.829039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:06.431 [2024-07-26 11:30:01.829047] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:06.431 [2024-07-26 11:30:01.829056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:06.431 [2024-07-26 11:30:01.829062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:06.431 [2024-07-26 11:30:01.829088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.431 [2024-07-26 11:30:01.829095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.431 [2024-07-26 11:30:01.829101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.431 [2024-07-26 11:30:01.829107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab0840 (9): Bad file descriptor 00:22:06.431 [2024-07-26 11:30:01.829114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:06.431 [2024-07-26 11:30:01.829120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:06.431 [2024-07-26 11:30:01.829126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:06.431 [2024-07-26 11:30:01.829135] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.431 [2024-07-26 11:30:01.829141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.431 [2024-07-26 11:30:01.829147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.431 [2024-07-26 11:30:01.829170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.431 [2024-07-26 11:30:01.829177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.431 [2024-07-26 11:30:01.829183] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:06.431 [2024-07-26 11:30:01.829189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:06.431 [2024-07-26 11:30:01.829195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:06.431 [2024-07-26 11:30:01.829217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.690 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:06.691 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1576516 00:22:07.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1576516) - No such process 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.628 rmmod nvme_tcp 00:22:07.628 rmmod nvme_fabrics 00:22:07.628 rmmod nvme_keyring 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.628 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.162 00:22:10.162 real 0m7.992s 00:22:10.162 
user 0m19.984s 00:22:10.162 sys 0m1.291s 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.162 ************************************ 00:22:10.162 END TEST nvmf_shutdown_tc3 00:22:10.162 ************************************ 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:10.162 00:22:10.162 real 0m31.733s 00:22:10.162 user 1m18.956s 00:22:10.162 sys 0m8.632s 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.162 ************************************ 00:22:10.162 END TEST nvmf_shutdown 00:22:10.162 ************************************ 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:10.162 00:22:10.162 real 10m47.186s 00:22:10.162 user 23m46.560s 00:22:10.162 sys 3m11.752s 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.162 11:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.162 ************************************ 00:22:10.162 END TEST nvmf_target_extra 00:22:10.162 ************************************ 00:22:10.162 11:30:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:10.162 11:30:05 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:10.162 11:30:05 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.162 11:30:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:10.162 
************************************ 00:22:10.162 START TEST nvmf_host 00:22:10.162 ************************************ 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:10.162 * Looking for test storage... 00:22:10.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.162 11:30:05 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.163 ************************************ 00:22:10.163 START TEST nvmf_multicontroller 00:22:10.163 ************************************ 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:10.163 * Looking for test storage... 
00:22:10.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.163 11:30:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:16.732 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:16.732 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:16.732 Found net devices under 0000:86:00.0: cvl_0_0 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:16.732 Found net devices under 0000:86:00.1: cvl_0_1 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.732 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:22:16.733 00:22:16.733 --- 10.0.0.2 ping statistics --- 00:22:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.733 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:22:16.733 00:22:16.733 --- 10.0.0.1 ping statistics --- 00:22:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.733 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1581204 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1581204 00:22:16.733 11:30:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1581204 ']' 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.733 11:30:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.733 [2024-07-26 11:30:11.560474] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:22:16.733 [2024-07-26 11:30:11.560523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.733 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.733 [2024-07-26 11:30:11.633863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:16.733 [2024-07-26 11:30:11.712557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.733 [2024-07-26 11:30:11.712589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:16.733 [2024-07-26 11:30:11.712597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.733 [2024-07-26 11:30:11.712603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.733 [2024-07-26 11:30:11.712607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.733 [2024-07-26 11:30:11.712659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.733 [2024-07-26 11:30:11.712696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.733 [2024-07-26 11:30:11.712697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.733 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.733 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:16.733 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.733 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:16.733 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 [2024-07-26 11:30:12.404001] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 Malloc0 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 [2024-07-26 
11:30:12.462370] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 [2024-07-26 11:30:12.470256] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 Malloc1 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1581363 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1581363 /var/tmp/bdevperf.sock 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1581363 ']' 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.992 11:30:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.927 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.927 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:17.927 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:17.927 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.927 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.927 NVMe0n1 00:22:17.927 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.928 1 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.928 request: 00:22:17.928 { 00:22:17.928 "name": "NVMe0", 00:22:17.928 "trtype": "tcp", 00:22:17.928 "traddr": "10.0.0.2", 00:22:17.928 "adrfam": "ipv4", 00:22:17.928 "trsvcid": "4420", 00:22:17.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.928 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:17.928 "hostaddr": "10.0.0.2", 00:22:17.928 "hostsvcid": "60000", 00:22:17.928 "prchk_reftag": false, 00:22:17.928 "prchk_guard": false, 00:22:17.928 "hdgst": false, 00:22:17.928 "ddgst": false, 00:22:17.928 "method": "bdev_nvme_attach_controller", 00:22:17.928 "req_id": 1 00:22:17.928 } 00:22:17.928 Got JSON-RPC error response 00:22:17.928 response: 00:22:17.928 { 00:22:17.928 "code": -114, 00:22:17.928 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:17.928 } 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:17.928 11:30:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.928 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.186 request: 00:22:18.186 { 00:22:18.186 "name": "NVMe0", 00:22:18.186 "trtype": "tcp", 00:22:18.186 "traddr": "10.0.0.2", 00:22:18.186 "adrfam": "ipv4", 00:22:18.186 "trsvcid": "4420", 00:22:18.186 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.186 "hostaddr": "10.0.0.2", 00:22:18.186 "hostsvcid": "60000", 00:22:18.186 "prchk_reftag": false, 00:22:18.186 "prchk_guard": false, 00:22:18.186 "hdgst": false, 00:22:18.186 "ddgst": false, 00:22:18.186 "method": "bdev_nvme_attach_controller", 00:22:18.186 "req_id": 1 00:22:18.186 } 00:22:18.186 Got JSON-RPC error response 00:22:18.186 response: 00:22:18.186 { 00:22:18.186 "code": -114, 00:22:18.186 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:18.186 } 00:22:18.186 
11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:18.187 11:30:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.187 request: 00:22:18.187 { 00:22:18.187 "name": "NVMe0", 00:22:18.187 "trtype": "tcp", 00:22:18.187 "traddr": "10.0.0.2", 00:22:18.187 "adrfam": "ipv4", 00:22:18.187 "trsvcid": "4420", 00:22:18.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.187 "hostaddr": "10.0.0.2", 00:22:18.187 "hostsvcid": "60000", 00:22:18.187 "prchk_reftag": false, 00:22:18.187 "prchk_guard": false, 00:22:18.187 "hdgst": false, 00:22:18.187 "ddgst": false, 00:22:18.187 "multipath": "disable", 00:22:18.187 "method": "bdev_nvme_attach_controller", 00:22:18.187 "req_id": 1 00:22:18.187 } 00:22:18.187 Got JSON-RPC error response 00:22:18.187 response: 00:22:18.187 { 00:22:18.187 "code": -114, 00:22:18.187 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:18.187 } 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.187 request: 00:22:18.187 { 00:22:18.187 "name": "NVMe0", 00:22:18.187 "trtype": "tcp", 00:22:18.187 "traddr": "10.0.0.2", 00:22:18.187 "adrfam": "ipv4", 00:22:18.187 "trsvcid": "4420", 00:22:18.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.187 "hostaddr": "10.0.0.2", 00:22:18.187 "hostsvcid": "60000", 00:22:18.187 "prchk_reftag": false, 00:22:18.187 "prchk_guard": false, 00:22:18.187 "hdgst": false, 00:22:18.187 "ddgst": false, 00:22:18.187 "multipath": "failover", 00:22:18.187 "method": "bdev_nvme_attach_controller", 00:22:18.187 "req_id": 1 00:22:18.187 } 00:22:18.187 Got JSON-RPC error response 00:22:18.187 response: 00:22:18.187 { 00:22:18.187 "code": -114, 00:22:18.187 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:18.187 
} 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.187 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.445 00:22:18.445 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.445 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:18.446 11:30:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.446 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.446 11:30:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.446 11:30:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.446 11:30:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:18.446 11:30:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:19.818 0 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1581363 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 1581363 ']' 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1581363 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1581363 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:19.818 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1581363' 00:22:19.819 killing process with pid 1581363 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1581363 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1581363 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:19.819 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:19.819 [2024-07-26 11:30:12.572959] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:22:19.819 [2024-07-26 11:30:12.573010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581363 ] 00:22:19.819 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.819 [2024-07-26 11:30:12.636145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.819 [2024-07-26 11:30:12.716276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.819 [2024-07-26 11:30:13.987358] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 413f0546-3091-4020-8de7-a64ce8e8320f already exists 00:22:19.819 [2024-07-26 11:30:13.987386] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:413f0546-3091-4020-8de7-a64ce8e8320f alias for bdev NVMe1n1 00:22:19.819 [2024-07-26 11:30:13.987394] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:19.819 Running I/O for 1 seconds... 
00:22:19.819 00:22:19.819 Latency(us) 00:22:19.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.819 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:19.819 NVMe0n1 : 1.01 23837.73 93.12 0.00 0.00 5352.83 4805.97 13981.01 00:22:19.819 =================================================================================================================== 00:22:19.819 Total : 23837.73 93.12 0.00 0.00 5352.83 4805.97 13981.01 00:22:19.819 Received shutdown signal, test time was about 1.000000 seconds 00:22:19.819 00:22:19.819 Latency(us) 00:22:19.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.819 =================================================================================================================== 00:22:19.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.819 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.819 
rmmod nvme_tcp 00:22:19.819 rmmod nvme_fabrics 00:22:19.819 rmmod nvme_keyring 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1581204 ']' 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1581204 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1581204 ']' 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1581204 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.819 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1581204 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1581204' 00:22:20.078 killing process with pid 1581204 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1581204 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1581204 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.078 11:30:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.078 11:30:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.719 00:22:22.719 real 0m12.180s 00:22:22.719 user 0m16.780s 00:22:22.719 sys 0m5.049s 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.719 ************************************ 00:22:22.719 END TEST nvmf_multicontroller 00:22:22.719 ************************************ 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.719 ************************************ 00:22:22.719 START TEST nvmf_aer 00:22:22.719 ************************************ 00:22:22.719 11:30:17 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:22.719 * Looking for test storage... 00:22:22.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.719 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.720 11:30:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.994 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.994 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.994 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:22:28.251 00:22:28.251 --- 10.0.0.2 ping statistics --- 00:22:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.251 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:28.251 00:22:28.251 --- 10.0.0.1 ping statistics --- 00:22:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.251 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1585359 00:22:28.251 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1585359 00:22:28.252 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.252 11:30:23 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1585359 ']' 00:22:28.252 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.252 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.252 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.252 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.252 11:30:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:28.252 [2024-07-26 11:30:23.764846] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:22:28.252 [2024-07-26 11:30:23.764891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.252 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.252 [2024-07-26 11:30:23.833259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.510 [2024-07-26 11:30:23.911222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.510 [2024-07-26 11:30:23.911255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.510 [2024-07-26 11:30:23.911262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.510 [2024-07-26 11:30:23.911268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:28.510 [2024-07-26 11:30:23.911274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.510 [2024-07-26 11:30:23.911318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.510 [2024-07-26 11:30:23.911403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.510 [2024-07-26 11:30:23.911508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.510 [2024-07-26 11:30:23.911509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.076 [2024-07-26 11:30:24.617864] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.076 11:30:24 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.076 Malloc0 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.076 [2024-07-26 11:30:24.669467] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.076 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.076 [ 
00:22:29.076 { 00:22:29.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:29.076 "subtype": "Discovery", 00:22:29.076 "listen_addresses": [], 00:22:29.077 "allow_any_host": true, 00:22:29.077 "hosts": [] 00:22:29.077 }, 00:22:29.077 { 00:22:29.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.077 "subtype": "NVMe", 00:22:29.077 "listen_addresses": [ 00:22:29.077 { 00:22:29.077 "trtype": "TCP", 00:22:29.077 "adrfam": "IPv4", 00:22:29.077 "traddr": "10.0.0.2", 00:22:29.077 "trsvcid": "4420" 00:22:29.077 } 00:22:29.077 ], 00:22:29.077 "allow_any_host": true, 00:22:29.077 "hosts": [], 00:22:29.077 "serial_number": "SPDK00000000000001", 00:22:29.077 "model_number": "SPDK bdev Controller", 00:22:29.077 "max_namespaces": 2, 00:22:29.077 "min_cntlid": 1, 00:22:29.077 "max_cntlid": 65519, 00:22:29.077 "namespaces": [ 00:22:29.077 { 00:22:29.077 "nsid": 1, 00:22:29.077 "bdev_name": "Malloc0", 00:22:29.077 "name": "Malloc0", 00:22:29.077 "nguid": "9B1DB163125D4E45885416287B1F6970", 00:22:29.077 "uuid": "9b1db163-125d-4e45-8854-16287b1f6970" 00:22:29.077 } 00:22:29.077 ] 00:22:29.077 } 00:22:29.077 ] 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1585603 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:29.077 11:30:24 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:29.077 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:29.336 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.336 Malloc1 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.336 Asynchronous Event Request test 00:22:29.336 Attaching to 10.0.0.2 00:22:29.336 Attached to 10.0.0.2 00:22:29.336 Registering asynchronous event callbacks... 00:22:29.336 Starting namespace attribute notice tests for all controllers... 00:22:29.336 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:29.336 aer_cb - Changed Namespace 00:22:29.336 Cleaning up... 
00:22:29.336 [ 00:22:29.336 { 00:22:29.336 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:29.336 "subtype": "Discovery", 00:22:29.336 "listen_addresses": [], 00:22:29.336 "allow_any_host": true, 00:22:29.336 "hosts": [] 00:22:29.336 }, 00:22:29.336 { 00:22:29.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.336 "subtype": "NVMe", 00:22:29.336 "listen_addresses": [ 00:22:29.336 { 00:22:29.336 "trtype": "TCP", 00:22:29.336 "adrfam": "IPv4", 00:22:29.336 "traddr": "10.0.0.2", 00:22:29.336 "trsvcid": "4420" 00:22:29.336 } 00:22:29.336 ], 00:22:29.336 "allow_any_host": true, 00:22:29.336 "hosts": [], 00:22:29.336 "serial_number": "SPDK00000000000001", 00:22:29.336 "model_number": "SPDK bdev Controller", 00:22:29.336 "max_namespaces": 2, 00:22:29.336 "min_cntlid": 1, 00:22:29.336 "max_cntlid": 65519, 00:22:29.336 "namespaces": [ 00:22:29.336 { 00:22:29.336 "nsid": 1, 00:22:29.336 "bdev_name": "Malloc0", 00:22:29.336 "name": "Malloc0", 00:22:29.336 "nguid": "9B1DB163125D4E45885416287B1F6970", 00:22:29.336 "uuid": "9b1db163-125d-4e45-8854-16287b1f6970" 00:22:29.336 }, 00:22:29.336 { 00:22:29.336 "nsid": 2, 00:22:29.336 "bdev_name": "Malloc1", 00:22:29.336 "name": "Malloc1", 00:22:29.336 "nguid": "21A13158EF4B46329E48AC8FACDCB715", 00:22:29.336 "uuid": "21a13158-ef4b-4632-9e48-ac8facdcb715" 00:22:29.336 } 00:22:29.336 ] 00:22:29.336 } 00:22:29.336 ] 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1585603 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.336 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.596 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.596 11:30:24 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:29.596 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.596 11:30:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.596 rmmod nvme_tcp 00:22:29.596 rmmod nvme_fabrics 00:22:29.596 rmmod nvme_keyring 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 
1585359 ']' 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1585359 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1585359 ']' 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1585359 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1585359 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1585359' 00:22:29.596 killing process with pid 1585359 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1585359 00:22:29.596 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1585359 00:22:29.855 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:29.855 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:29.855 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:29.855 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:29.856 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:29.856 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.856 11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.856 
11:30:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.761 11:30:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.761 00:22:31.761 real 0m9.563s 00:22:31.761 user 0m7.376s 00:22:31.761 sys 0m4.743s 00:22:31.761 11:30:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.761 11:30:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.761 ************************************ 00:22:31.761 END TEST nvmf_aer 00:22:31.761 ************************************ 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.021 ************************************ 00:22:32.021 START TEST nvmf_async_init 00:22:32.021 ************************************ 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:32.021 * Looking for test storage... 
00:22:32.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.021 11:30:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:32.021 11:30:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=53a9f75149a845d58bc09036e7355bc3 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.021 11:30:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.590 
11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:38.590 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.590 11:30:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:38.590 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:38.590 Found net devices under 0000:86:00.0: cvl_0_0 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.590 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:38.591 Found net devices under 0000:86:00.1: cvl_0_1 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:22:38.591 00:22:38.591 --- 10.0.0.2 ping statistics --- 00:22:38.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.591 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:38.591 00:22:38.591 --- 10.0.0.1 ping statistics --- 00:22:38.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.591 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.591 11:30:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1589120 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1589120 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1589120 ']' 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.591 11:30:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.591 [2024-07-26 11:30:33.401532] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:22:38.591 [2024-07-26 11:30:33.401574] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.591 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.591 [2024-07-26 11:30:33.470164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.591 [2024-07-26 11:30:33.548303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.591 [2024-07-26 11:30:33.548335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.591 [2024-07-26 11:30:33.548341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.591 [2024-07-26 11:30:33.548347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.591 [2024-07-26 11:30:33.548352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
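The `waitforlisten 1589120` step above blocks until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock` (with `max_retries=100`, per the trace). A simplified sketch of that polling pattern — the real helper in `autotest_common.sh` also checks the PID and retries actual RPC calls; this toy version only tests that the socket file exists, and the socket path in the demo is hypothetical:

```shell
wait_for_socket() {
  # Poll for a UNIX domain socket: up to $2 attempts, 0.1s apart.
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0
    sleep 0.1
  done
  return 1
}

# Demo: a path that never appears times out after 3 attempts.
if wait_for_socket /tmp/no-such.sock 3; then echo up; else echo timeout; fi
```

Bounding the retries matters here: without the cap, a target that crashes before creating its RPC socket would hang the whole CI stage instead of failing it.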
00:22:38.591 [2024-07-26 11:30:33.548385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.591 [2024-07-26 11:30:34.237773] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.591 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.850 null0 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 53a9f75149a845d58bc09036e7355bc3 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:38.850 [2024-07-26 11:30:34.281983] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.850 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.109 nvme0n1 00:22:39.109 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.109 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.109 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.109 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.109 [ 00:22:39.109 { 00:22:39.109 "name": "nvme0n1", 00:22:39.109 "aliases": [ 00:22:39.109 "53a9f751-49a8-45d5-8bc0-9036e7355bc3" 00:22:39.109 ], 00:22:39.109 "product_name": "NVMe disk", 00:22:39.109 "block_size": 512, 00:22:39.109 "num_blocks": 2097152, 00:22:39.109 "uuid": "53a9f751-49a8-45d5-8bc0-9036e7355bc3", 00:22:39.109 "assigned_rate_limits": { 00:22:39.109 "rw_ios_per_sec": 0, 00:22:39.109 "rw_mbytes_per_sec": 0, 00:22:39.109 "r_mbytes_per_sec": 0, 00:22:39.109 "w_mbytes_per_sec": 0 00:22:39.109 }, 00:22:39.109 "claimed": false, 00:22:39.109 "zoned": false, 00:22:39.109 "supported_io_types": { 00:22:39.109 "read": true, 00:22:39.109 "write": true, 00:22:39.109 "unmap": false, 00:22:39.109 "flush": true, 00:22:39.109 "reset": true, 00:22:39.109 "nvme_admin": true, 00:22:39.109 "nvme_io": true, 00:22:39.109 "nvme_io_md": false, 00:22:39.109 "write_zeroes": true, 00:22:39.109 "zcopy": false, 00:22:39.109 "get_zone_info": false, 00:22:39.109 "zone_management": false, 00:22:39.109 "zone_append": false, 00:22:39.109 "compare": true, 00:22:39.109 "compare_and_write": true, 00:22:39.109 "abort": true, 00:22:39.109 "seek_hole": false, 00:22:39.109 "seek_data": false, 00:22:39.109 "copy": true, 00:22:39.109 "nvme_iov_md": false 
00:22:39.109 }, 00:22:39.109 "memory_domains": [ 00:22:39.109 { 00:22:39.109 "dma_device_id": "system", 00:22:39.110 "dma_device_type": 1 00:22:39.110 } 00:22:39.110 ], 00:22:39.110 "driver_specific": { 00:22:39.110 "nvme": [ 00:22:39.110 { 00:22:39.110 "trid": { 00:22:39.110 "trtype": "TCP", 00:22:39.110 "adrfam": "IPv4", 00:22:39.110 "traddr": "10.0.0.2", 00:22:39.110 "trsvcid": "4420", 00:22:39.110 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.110 }, 00:22:39.110 "ctrlr_data": { 00:22:39.110 "cntlid": 1, 00:22:39.110 "vendor_id": "0x8086", 00:22:39.110 "model_number": "SPDK bdev Controller", 00:22:39.110 "serial_number": "00000000000000000000", 00:22:39.110 "firmware_revision": "24.09", 00:22:39.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.110 "oacs": { 00:22:39.110 "security": 0, 00:22:39.110 "format": 0, 00:22:39.110 "firmware": 0, 00:22:39.110 "ns_manage": 0 00:22:39.110 }, 00:22:39.110 "multi_ctrlr": true, 00:22:39.110 "ana_reporting": false 00:22:39.110 }, 00:22:39.110 "vs": { 00:22:39.110 "nvme_version": "1.3" 00:22:39.110 }, 00:22:39.110 "ns_data": { 00:22:39.110 "id": 1, 00:22:39.110 "can_share": true 00:22:39.110 } 00:22:39.110 } 00:22:39.110 ], 00:22:39.110 "mp_policy": "active_passive" 00:22:39.110 } 00:22:39.110 } 00:22:39.110 ] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.110 [2024-07-26 11:30:34.546557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.110 [2024-07-26 11:30:34.546620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2167390 
(9): Bad file descriptor 00:22:39.110 [2024-07-26 11:30:34.678732] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.110 [ 00:22:39.110 { 00:22:39.110 "name": "nvme0n1", 00:22:39.110 "aliases": [ 00:22:39.110 "53a9f751-49a8-45d5-8bc0-9036e7355bc3" 00:22:39.110 ], 00:22:39.110 "product_name": "NVMe disk", 00:22:39.110 "block_size": 512, 00:22:39.110 "num_blocks": 2097152, 00:22:39.110 "uuid": "53a9f751-49a8-45d5-8bc0-9036e7355bc3", 00:22:39.110 "assigned_rate_limits": { 00:22:39.110 "rw_ios_per_sec": 0, 00:22:39.110 "rw_mbytes_per_sec": 0, 00:22:39.110 "r_mbytes_per_sec": 0, 00:22:39.110 "w_mbytes_per_sec": 0 00:22:39.110 }, 00:22:39.110 "claimed": false, 00:22:39.110 "zoned": false, 00:22:39.110 "supported_io_types": { 00:22:39.110 "read": true, 00:22:39.110 "write": true, 00:22:39.110 "unmap": false, 00:22:39.110 "flush": true, 00:22:39.110 "reset": true, 00:22:39.110 "nvme_admin": true, 00:22:39.110 "nvme_io": true, 00:22:39.110 "nvme_io_md": false, 00:22:39.110 "write_zeroes": true, 00:22:39.110 "zcopy": false, 00:22:39.110 "get_zone_info": false, 00:22:39.110 "zone_management": false, 00:22:39.110 "zone_append": false, 00:22:39.110 "compare": true, 00:22:39.110 "compare_and_write": true, 00:22:39.110 "abort": true, 00:22:39.110 "seek_hole": false, 00:22:39.110 "seek_data": false, 00:22:39.110 "copy": true, 00:22:39.110 "nvme_iov_md": false 00:22:39.110 }, 00:22:39.110 "memory_domains": [ 00:22:39.110 { 00:22:39.110 "dma_device_id": "system", 00:22:39.110 "dma_device_type": 1 
00:22:39.110 } 00:22:39.110 ], 00:22:39.110 "driver_specific": { 00:22:39.110 "nvme": [ 00:22:39.110 { 00:22:39.110 "trid": { 00:22:39.110 "trtype": "TCP", 00:22:39.110 "adrfam": "IPv4", 00:22:39.110 "traddr": "10.0.0.2", 00:22:39.110 "trsvcid": "4420", 00:22:39.110 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.110 }, 00:22:39.110 "ctrlr_data": { 00:22:39.110 "cntlid": 2, 00:22:39.110 "vendor_id": "0x8086", 00:22:39.110 "model_number": "SPDK bdev Controller", 00:22:39.110 "serial_number": "00000000000000000000", 00:22:39.110 "firmware_revision": "24.09", 00:22:39.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.110 "oacs": { 00:22:39.110 "security": 0, 00:22:39.110 "format": 0, 00:22:39.110 "firmware": 0, 00:22:39.110 "ns_manage": 0 00:22:39.110 }, 00:22:39.110 "multi_ctrlr": true, 00:22:39.110 "ana_reporting": false 00:22:39.110 }, 00:22:39.110 "vs": { 00:22:39.110 "nvme_version": "1.3" 00:22:39.110 }, 00:22:39.110 "ns_data": { 00:22:39.110 "id": 1, 00:22:39.110 "can_share": true 00:22:39.110 } 00:22:39.110 } 00:22:39.110 ], 00:22:39.110 "mp_policy": "active_passive" 00:22:39.110 } 00:22:39.110 } 00:22:39.110 ] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.JJxbEWNXzZ 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.JJxbEWNXzZ 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.110 [2024-07-26 11:30:34.739157] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.110 [2024-07-26 11:30:34.739261] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JJxbEWNXzZ 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.110 [2024-07-26 11:30:34.747172] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JJxbEWNXzZ 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.110 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.110 [2024-07-26 11:30:34.755204] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.110 [2024-07-26 11:30:34.755238] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:39.370 nvme0n1 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.370 [ 00:22:39.370 { 00:22:39.370 "name": "nvme0n1", 00:22:39.370 "aliases": [ 00:22:39.370 "53a9f751-49a8-45d5-8bc0-9036e7355bc3" 00:22:39.370 ], 00:22:39.370 "product_name": "NVMe disk", 00:22:39.370 "block_size": 512, 00:22:39.370 "num_blocks": 2097152, 00:22:39.370 "uuid": "53a9f751-49a8-45d5-8bc0-9036e7355bc3", 00:22:39.370 "assigned_rate_limits": { 00:22:39.370 "rw_ios_per_sec": 0, 00:22:39.370 "rw_mbytes_per_sec": 0, 00:22:39.370 "r_mbytes_per_sec": 0, 00:22:39.370 "w_mbytes_per_sec": 0 00:22:39.370 }, 00:22:39.370 "claimed": false, 00:22:39.370 "zoned": false, 00:22:39.370 "supported_io_types": { 
00:22:39.370 "read": true, 00:22:39.370 "write": true, 00:22:39.370 "unmap": false, 00:22:39.370 "flush": true, 00:22:39.370 "reset": true, 00:22:39.370 "nvme_admin": true, 00:22:39.370 "nvme_io": true, 00:22:39.370 "nvme_io_md": false, 00:22:39.370 "write_zeroes": true, 00:22:39.370 "zcopy": false, 00:22:39.370 "get_zone_info": false, 00:22:39.370 "zone_management": false, 00:22:39.370 "zone_append": false, 00:22:39.370 "compare": true, 00:22:39.370 "compare_and_write": true, 00:22:39.370 "abort": true, 00:22:39.370 "seek_hole": false, 00:22:39.370 "seek_data": false, 00:22:39.370 "copy": true, 00:22:39.370 "nvme_iov_md": false 00:22:39.370 }, 00:22:39.370 "memory_domains": [ 00:22:39.370 { 00:22:39.370 "dma_device_id": "system", 00:22:39.370 "dma_device_type": 1 00:22:39.370 } 00:22:39.370 ], 00:22:39.370 "driver_specific": { 00:22:39.370 "nvme": [ 00:22:39.370 { 00:22:39.370 "trid": { 00:22:39.370 "trtype": "TCP", 00:22:39.370 "adrfam": "IPv4", 00:22:39.370 "traddr": "10.0.0.2", 00:22:39.370 "trsvcid": "4421", 00:22:39.370 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.370 }, 00:22:39.370 "ctrlr_data": { 00:22:39.370 "cntlid": 3, 00:22:39.370 "vendor_id": "0x8086", 00:22:39.370 "model_number": "SPDK bdev Controller", 00:22:39.370 "serial_number": "00000000000000000000", 00:22:39.370 "firmware_revision": "24.09", 00:22:39.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.370 "oacs": { 00:22:39.370 "security": 0, 00:22:39.370 "format": 0, 00:22:39.370 "firmware": 0, 00:22:39.370 "ns_manage": 0 00:22:39.370 }, 00:22:39.370 "multi_ctrlr": true, 00:22:39.370 "ana_reporting": false 00:22:39.370 }, 00:22:39.370 "vs": { 00:22:39.370 "nvme_version": "1.3" 00:22:39.370 }, 00:22:39.370 "ns_data": { 00:22:39.370 "id": 1, 00:22:39.370 "can_share": true 00:22:39.370 } 00:22:39.370 } 00:22:39.370 ], 00:22:39.370 "mp_policy": "active_passive" 00:22:39.370 } 00:22:39.370 } 00:22:39.370 ] 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.JJxbEWNXzZ 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.370 rmmod nvme_tcp 00:22:39.370 rmmod nvme_fabrics 00:22:39.370 rmmod nvme_keyring 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1589120 ']' 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
1589120 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1589120 ']' 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1589120 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589120 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589120' 00:22:39.370 killing process with pid 1589120 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1589120 00:22:39.370 [2024-07-26 11:30:34.961676] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:39.370 [2024-07-26 11:30:34.961698] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:39.370 11:30:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1589120 00:22:39.629 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.629 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.629 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.629 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.629 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.630 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.630 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.630 11:30:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.534 11:30:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.534 00:22:41.534 real 0m9.701s 00:22:41.534 user 0m3.587s 00:22:41.534 sys 0m4.645s 00:22:41.534 11:30:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.534 11:30:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.534 ************************************ 00:22:41.534 END TEST nvmf_async_init 00:22:41.534 ************************************ 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.793 ************************************ 00:22:41.793 START TEST dma 00:22:41.793 ************************************ 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:41.793 * Looking for test storage... 
00:22:41.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.793 11:30:37 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.793 11:30:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:41.794 00:22:41.794 real 0m0.118s 00:22:41.794 user 0m0.050s 00:22:41.794 sys 0m0.076s 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:41.794 ************************************ 00:22:41.794 END TEST dma 00:22:41.794 ************************************ 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.794 ************************************ 00:22:41.794 START TEST nvmf_identify 00:22:41.794 ************************************ 00:22:41.794 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:42.053 * Looking for test storage... 
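The dma test above finishes in well under a second because host/dma.sh checks the transport first (the `'[' tcp '!=' rdma ']'` / `exit 0` pair in the trace): the test is rdma-only and is skipped, with a passing status, on tcp runs. A minimal sketch of that guard; `dma_guard` is a hypothetical helper name, the real script runs the check inline and calls `exit 0` directly:

```shell
# Sketch of the transport guard at host/dma.sh@12-13 in the trace above.
# dma_guard is a hypothetical name; the real script checks inline and exits.
dma_guard() {
    local transport="$1"
    if [ "$transport" != "rdma" ]; then
        echo "skipping dma test: transport is $transport, not rdma"
        return 0   # the real script exits the whole test here with status 0
    fi
    echo "running dma test over $transport"
}

dma_guard tcp
```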
00:22:42.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.053 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.054 11:30:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.623 11:30:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.623 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.623 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:48.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:22:48.623 00:22:48.623 --- 10.0.0.2 ping statistics --- 00:22:48.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.623 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:22:48.623 00:22:48.623 --- 10.0.0.1 ping statistics --- 00:22:48.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.623 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.623 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
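The nvmf_tcp_init steps traced above (nvmf/common.sh@229-268) move one of the two cvl_0_* net devices into a fresh network namespace so the target and the initiator get separate stacks on one host, then verify connectivity both ways with ping. Condensed as a sketch; it assumes root privileges and the cvl_0_0/cvl_0_1 interfaces from this particular test bed, so it is illustrative rather than portable:

```shell
# Condensed from the nvmf_tcp_init trace above; requires root and the
# cvl_0_0 / cvl_0_1 interfaces present on this test machine.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target side lives in its own namespace; initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to the default port before the target starts.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity checks, matching the two ping transcripts in the log.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```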
00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1592854 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1592854 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1592854 ']' 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.624 11:30:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 [2024-07-26 11:30:43.358400] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:22:48.624 [2024-07-26 11:30:43.358442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.624 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.624 [2024-07-26 11:30:43.430503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.624 [2024-07-26 11:30:43.505534] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.624 [2024-07-26 11:30:43.505571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.624 [2024-07-26 11:30:43.505578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.624 [2024-07-26 11:30:43.505583] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.624 [2024-07-26 11:30:43.505588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.624 [2024-07-26 11:30:43.505646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.624 [2024-07-26 11:30:43.505721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.624 [2024-07-26 11:30:43.505827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.624 [2024-07-26 11:30:43.505828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 [2024-07-26 11:30:44.166643] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 Malloc0 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 11:30:44 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 [2024-07-26 11:30:44.254365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 11:30:44 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.624 [ 00:22:48.624 { 00:22:48.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:48.624 "subtype": "Discovery", 00:22:48.624 "listen_addresses": [ 00:22:48.624 { 00:22:48.624 "trtype": "TCP", 00:22:48.624 "adrfam": "IPv4", 00:22:48.624 "traddr": "10.0.0.2", 00:22:48.624 "trsvcid": "4420" 00:22:48.624 } 00:22:48.624 ], 00:22:48.624 "allow_any_host": true, 00:22:48.624 "hosts": [] 00:22:48.624 }, 00:22:48.624 { 00:22:48.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.624 "subtype": "NVMe", 00:22:48.624 "listen_addresses": [ 00:22:48.624 { 00:22:48.624 "trtype": "TCP", 00:22:48.624 "adrfam": "IPv4", 00:22:48.624 "traddr": "10.0.0.2", 00:22:48.624 "trsvcid": "4420" 00:22:48.624 } 00:22:48.624 ], 00:22:48.624 "allow_any_host": true, 00:22:48.624 "hosts": [], 00:22:48.624 "serial_number": "SPDK00000000000001", 00:22:48.624 "model_number": "SPDK bdev Controller", 00:22:48.624 "max_namespaces": 32, 00:22:48.624 "min_cntlid": 1, 00:22:48.624 "max_cntlid": 65519, 00:22:48.624 "namespaces": [ 00:22:48.624 { 00:22:48.624 "nsid": 1, 00:22:48.624 "bdev_name": "Malloc0", 00:22:48.624 "name": "Malloc0", 00:22:48.624 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:48.624 "eui64": "ABCDEF0123456789", 00:22:48.624 "uuid": "eded1ca0-8961-46fc-a4c2-73114c526bf1" 00:22:48.624 } 00:22:48.624 ] 00:22:48.624 } 00:22:48.624 ] 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.624 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:48.885 [2024-07-26 11:30:44.305703] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:22:48.885 [2024-07-26 11:30:44.305746] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592967 ] 00:22:48.885 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.885 [2024-07-26 11:30:44.334934] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:48.885 [2024-07-26 11:30:44.334981] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:48.885 [2024-07-26 11:30:44.334985] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:48.885 [2024-07-26 11:30:44.334997] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:48.885 [2024-07-26 11:30:44.335005] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:48.885 [2024-07-26 11:30:44.335268] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:48.885 [2024-07-26 11:30:44.335291] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11f2ec0 0 00:22:48.885 [2024-07-26 11:30:44.349632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:48.885 [2024-07-26 11:30:44.349646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:48.885 [2024-07-26 11:30:44.349651] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
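The `rpc_cmd` sequence traced above (`nvmf_create_transport`, `bdev_malloc_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`) is a thin wrapper around SPDK's JSON-RPC interface. A minimal sketch of the request bodies those calls generate is below; the method names come straight from the log, while the parameter names follow my reading of the SPDK JSON-RPC documentation and should be treated as assumptions, not a verified transcript of what `rpc.py` sends.

```python
import json

NQN = "nqn.2016-06.io.spdk:cnode1"

def rpc(req_id, method, params):
    """Build one JSON-RPC 2.0 request in the shape SPDK's rpc.py uses."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

requests = [
    # nvmf_create_transport -t tcp -o -u 8192
    rpc(1, "nvmf_create_transport", {"trtype": "tcp", "io_unit_size": 8192}),
    # bdev_malloc_create 64 512 -b Malloc0  (64 MiB bdev, 512-byte blocks)
    rpc(2, "bdev_malloc_create", {"num_blocks": 64 * 1024 * 1024 // 512,
                                  "block_size": 512, "name": "Malloc0"}),
    # nvmf_create_subsystem ... -a -s SPDK00000000000001
    rpc(3, "nvmf_create_subsystem", {"nqn": NQN, "allow_any_host": True,
                                     "serial_number": "SPDK00000000000001"}),
    # nvmf_subsystem_add_ns ... --nguid ... --eui64 ...
    rpc(4, "nvmf_subsystem_add_ns", {"nqn": NQN, "namespace": {
        "bdev_name": "Malloc0",
        "nguid": "ABCDEF0123456789ABCDEF0123456789",
        "eui64": "ABCDEF0123456789"}}),
    # nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420
    rpc(5, "nvmf_subsystem_add_listener", {"nqn": NQN, "listen_address": {
        "trtype": "tcp", "adrfam": "ipv4",
        "traddr": "10.0.0.2", "trsvcid": "4420"}}),
]

payload = "\n".join(json.dumps(r) for r in requests)
print(len(requests), "RPCs queued")
```

The listener parameters (`trtype`/`adrfam`/`traddr`/`trsvcid`) mirror the `listen_addresses` entries that `nvmf_get_subsystems` returns later in this log, which is why the discovery and NVMe subsystems both report `10.0.0.2:4420`.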
00:22:48.885 [2024-07-26 11:30:44.349654] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:48.885 [2024-07-26 11:30:44.349691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.349696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.349700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.885 [2024-07-26 11:30:44.349712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:48.885 [2024-07-26 11:30:44.349727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.885 [2024-07-26 11:30:44.356636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.885 [2024-07-26 11:30:44.356645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.885 [2024-07-26 11:30:44.356651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.356655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.885 [2024-07-26 11:30:44.356666] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:48.885 [2024-07-26 11:30:44.356671] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:48.885 [2024-07-26 11:30:44.356676] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:48.885 [2024-07-26 11:30:44.356687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.356691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.356694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.885 [2024-07-26 11:30:44.356701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.885 [2024-07-26 11:30:44.356713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.885 [2024-07-26 11:30:44.356929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.885 [2024-07-26 11:30:44.356934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.885 [2024-07-26 11:30:44.356937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.356941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.885 [2024-07-26 11:30:44.356947] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:48.885 [2024-07-26 11:30:44.356953] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:48.885 [2024-07-26 11:30:44.356959] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.356962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.356965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.885 [2024-07-26 11:30:44.356971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.885 [2024-07-26 11:30:44.356981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.885 [2024-07-26 11:30:44.357075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.885 [2024-07-26 11:30:44.357081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:48.885 [2024-07-26 11:30:44.357084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.357087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.885 [2024-07-26 11:30:44.357091] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:48.885 [2024-07-26 11:30:44.357098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:48.885 [2024-07-26 11:30:44.357103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.357106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.357109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.885 [2024-07-26 11:30:44.357115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.885 [2024-07-26 11:30:44.357123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.885 [2024-07-26 11:30:44.357227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.885 [2024-07-26 11:30:44.357233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.885 [2024-07-26 11:30:44.357237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.357241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.885 [2024-07-26 11:30:44.357246] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:48.885 [2024-07-26 11:30:44.357253] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.357257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.885 [2024-07-26 11:30:44.357260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.885 [2024-07-26 11:30:44.357265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.885 [2024-07-26 11:30:44.357274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.885 [2024-07-26 11:30:44.357336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.885 [2024-07-26 11:30:44.357341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.886 [2024-07-26 11:30:44.357344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.886 [2024-07-26 11:30:44.357351] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:48.886 [2024-07-26 11:30:44.357355] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:48.886 [2024-07-26 11:30:44.357361] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:48.886 [2024-07-26 11:30:44.357466] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:48.886 [2024-07-26 11:30:44.357470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:48.886 [2024-07-26 11:30:44.357477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.357489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.886 [2024-07-26 11:30:44.357498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.886 [2024-07-26 11:30:44.357617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.886 [2024-07-26 11:30:44.357622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.886 [2024-07-26 11:30:44.357630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357633] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.886 [2024-07-26 11:30:44.357637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:48.886 [2024-07-26 11:30:44.357644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.357657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.886 [2024-07-26 11:30:44.357666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.886 [2024-07-26 
11:30:44.357768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.886 [2024-07-26 11:30:44.357774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.886 [2024-07-26 11:30:44.357777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.886 [2024-07-26 11:30:44.357783] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:48.886 [2024-07-26 11:30:44.357787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:48.886 [2024-07-26 11:30:44.357793] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:48.886 [2024-07-26 11:30:44.357800] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:48.886 [2024-07-26 11:30:44.357808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.357811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.357816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.886 [2024-07-26 11:30:44.357825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.886 [2024-07-26 11:30:44.358026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:48.886 [2024-07-26 11:30:44.358032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
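The `*DEBUG*` transitions above trace the fabrics enable handshake: the host reads `CC.EN` and `CSTS.RDY` via FABRIC PROPERTY GET, sees `CC.EN = 0 && CSTS.RDY = 0`, writes `CC.EN = 1`, polls until `CSTS.RDY = 1`, and only then issues IDENTIFY. A toy decision function modelling that pair of bits is sketched below; it is an illustration of the state machine visible in the log, not SPDK's actual `nvme_ctrlr_process_init()`.

```python
# Map the (CC.EN, CSTS.RDY) register pair to the next init step, mirroring
# the transitions logged above. Purely illustrative.
def next_init_step(cc_en: int, csts_rdy: int) -> str:
    if cc_en == 0 and csts_rdy == 0:
        return "write CC.EN = 1"          # controller disabled: enable it
    if cc_en == 1 and csts_rdy == 0:
        return "poll CSTS.RDY"            # enable latched, wait for ready
    if cc_en == 1 and csts_rdy == 1:
        return "identify controller"      # controller ready: send IDENTIFY
    return "wait for CSTS.RDY = 0"        # EN=0, RDY=1: disable in flight

# The log's sequence: EN=0/RDY=0 -> enable -> poll -> EN=1/RDY=1 -> identify.
steps = [next_init_step(0, 0), next_init_step(1, 0), next_init_step(1, 1)]
print(" -> ".join(steps))
```

Each register read shows up in the log as a `FABRIC PROPERTY GET qid:0 cid:0` capsule on the admin queue, and the `CC.EN = 1` write as the single `FABRIC PROPERTY SET`.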
00:22:48.886 [2024-07-26 11:30:44.358036] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358039] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f2ec0): datao=0, datal=4096, cccid=0 00:22:48.886 [2024-07-26 11:30:44.358042] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1275e40) on tqpair(0x11f2ec0): expected_datao=0, payload_size=4096 00:22:48.886 [2024-07-26 11:30:44.358046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358053] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358057] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.886 [2024-07-26 11:30:44.358068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.886 [2024-07-26 11:30:44.358071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.886 [2024-07-26 11:30:44.358081] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:48.886 [2024-07-26 11:30:44.358085] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:48.886 [2024-07-26 11:30:44.358088] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:48.886 [2024-07-26 11:30:44.358093] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:48.886 [2024-07-26 11:30:44.358097] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:22:48.886 [2024-07-26 11:30:44.358101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:48.886 [2024-07-26 11:30:44.358109] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:48.886 [2024-07-26 11:30:44.358117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.358132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:48.886 [2024-07-26 11:30:44.358143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.886 [2024-07-26 11:30:44.358220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.886 [2024-07-26 11:30:44.358226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.886 [2024-07-26 11:30:44.358229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.886 [2024-07-26 11:30:44.358238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.358249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.886 [2024-07-26 11:30:44.358255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.358265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.886 [2024-07-26 11:30:44.358270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.358281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.886 [2024-07-26 11:30:44.358286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.358297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.886 [2024-07-26 11:30:44.358301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:48.886 [2024-07-26 11:30:44.358310] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:22:48.886 [2024-07-26 11:30:44.358316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f2ec0) 00:22:48.886 [2024-07-26 11:30:44.358325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.886 [2024-07-26 11:30:44.358335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275e40, cid 0, qid 0 00:22:48.886 [2024-07-26 11:30:44.358339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1275fc0, cid 1, qid 0 00:22:48.886 [2024-07-26 11:30:44.358343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1276140, cid 2, qid 0 00:22:48.886 [2024-07-26 11:30:44.358347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.886 [2024-07-26 11:30:44.358351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1276440, cid 4, qid 0 00:22:48.886 [2024-07-26 11:30:44.358449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.886 [2024-07-26 11:30:44.358455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.886 [2024-07-26 11:30:44.358458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1276440) on tqpair=0x11f2ec0 00:22:48.886 [2024-07-26 11:30:44.358466] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:48.886 [2024-07-26 11:30:44.358470] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:48.886 [2024-07-26 11:30:44.358479] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.886 [2024-07-26 11:30:44.358483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f2ec0) 00:22:48.887 [2024-07-26 11:30:44.358488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.887 [2024-07-26 11:30:44.358497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1276440, cid 4, qid 0 00:22:48.887 [2024-07-26 11:30:44.358576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:48.887 [2024-07-26 11:30:44.358582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:48.887 [2024-07-26 11:30:44.358585] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358588] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f2ec0): datao=0, datal=4096, cccid=4 00:22:48.887 [2024-07-26 11:30:44.358592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1276440) on tqpair(0x11f2ec0): expected_datao=0, payload_size=4096 00:22:48.887 [2024-07-26 11:30:44.358595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358601] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358604] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.887 [2024-07-26 11:30:44.358620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.887 [2024-07-26 11:30:44.358623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1276440) on tqpair=0x11f2ec0 00:22:48.887 [2024-07-26 11:30:44.358641] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:48.887 [2024-07-26 11:30:44.358660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f2ec0) 00:22:48.887 [2024-07-26 11:30:44.358669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.887 [2024-07-26 11:30:44.358675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f2ec0) 00:22:48.887 [2024-07-26 11:30:44.358687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.887 [2024-07-26 11:30:44.358699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1276440, cid 4, qid 0 00:22:48.887 [2024-07-26 11:30:44.358704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12765c0, cid 5, qid 0 00:22:48.887 [2024-07-26 11:30:44.358824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:48.887 [2024-07-26 11:30:44.358830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:48.887 [2024-07-26 11:30:44.358835] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358838] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f2ec0): datao=0, datal=1024, cccid=4 00:22:48.887 [2024-07-26 11:30:44.358841] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1276440) on tqpair(0x11f2ec0): expected_datao=0, 
payload_size=1024 00:22:48.887 [2024-07-26 11:30:44.358845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358851] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358854] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.887 [2024-07-26 11:30:44.358863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.887 [2024-07-26 11:30:44.358866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.358869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12765c0) on tqpair=0x11f2ec0 00:22:48.887 [2024-07-26 11:30:44.402635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.887 [2024-07-26 11:30:44.402645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.887 [2024-07-26 11:30:44.402649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.402652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1276440) on tqpair=0x11f2ec0 00:22:48.887 [2024-07-26 11:30:44.402667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.402671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f2ec0) 00:22:48.887 [2024-07-26 11:30:44.402678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.887 [2024-07-26 11:30:44.402693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1276440, cid 4, qid 0 00:22:48.887 [2024-07-26 11:30:44.402858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:48.887 [2024-07-26 11:30:44.402864] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:48.887 [2024-07-26 11:30:44.402867] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.402869] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f2ec0): datao=0, datal=3072, cccid=4 00:22:48.887 [2024-07-26 11:30:44.402873] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1276440) on tqpair(0x11f2ec0): expected_datao=0, payload_size=3072 00:22:48.887 [2024-07-26 11:30:44.402877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.402897] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.402901] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.444769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.887 [2024-07-26 11:30:44.444778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.887 [2024-07-26 11:30:44.444781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.444785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1276440) on tqpair=0x11f2ec0 00:22:48.887 [2024-07-26 11:30:44.444793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.444797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f2ec0) 00:22:48.887 [2024-07-26 11:30:44.444803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.887 [2024-07-26 11:30:44.444816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1276440, cid 4, qid 0 00:22:48.887 [2024-07-26 11:30:44.444887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:48.887 [2024-07-26 
11:30:44.444892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:48.887 [2024-07-26 11:30:44.444895] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.444901] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f2ec0): datao=0, datal=8, cccid=4 00:22:48.887 [2024-07-26 11:30:44.444904] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1276440) on tqpair(0x11f2ec0): expected_datao=0, payload_size=8 00:22:48.887 [2024-07-26 11:30:44.444908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.444914] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.444917] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.486760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.887 [2024-07-26 11:30:44.486771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.887 [2024-07-26 11:30:44.486776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.887 [2024-07-26 11:30:44.486780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1276440) on tqpair=0x11f2ec0 00:22:48.887 ===================================================== 00:22:48.887 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:48.887 ===================================================== 00:22:48.887 Controller Capabilities/Features 00:22:48.887 ================================ 00:22:48.887 Vendor ID: 0000 00:22:48.887 Subsystem Vendor ID: 0000 00:22:48.887 Serial Number: .................... 00:22:48.887 Model Number: ........................................ 
00:22:48.887 Firmware Version: 24.09 00:22:48.887 Recommended Arb Burst: 0 00:22:48.887 IEEE OUI Identifier: 00 00 00 00:22:48.887 Multi-path I/O 00:22:48.887 May have multiple subsystem ports: No 00:22:48.887 May have multiple controllers: No 00:22:48.887 Associated with SR-IOV VF: No 00:22:48.887 Max Data Transfer Size: 131072 00:22:48.887 Max Number of Namespaces: 0 00:22:48.887 Max Number of I/O Queues: 1024 00:22:48.887 NVMe Specification Version (VS): 1.3 00:22:48.887 NVMe Specification Version (Identify): 1.3 00:22:48.887 Maximum Queue Entries: 128 00:22:48.887 Contiguous Queues Required: Yes 00:22:48.887 Arbitration Mechanisms Supported 00:22:48.887 Weighted Round Robin: Not Supported 00:22:48.887 Vendor Specific: Not Supported 00:22:48.887 Reset Timeout: 15000 ms 00:22:48.887 Doorbell Stride: 4 bytes 00:22:48.887 NVM Subsystem Reset: Not Supported 00:22:48.887 Command Sets Supported 00:22:48.887 NVM Command Set: Supported 00:22:48.887 Boot Partition: Not Supported 00:22:48.887 Memory Page Size Minimum: 4096 bytes 00:22:48.887 Memory Page Size Maximum: 4096 bytes 00:22:48.887 Persistent Memory Region: Not Supported 00:22:48.887 Optional Asynchronous Events Supported 00:22:48.887 Namespace Attribute Notices: Not Supported 00:22:48.887 Firmware Activation Notices: Not Supported 00:22:48.887 ANA Change Notices: Not Supported 00:22:48.887 PLE Aggregate Log Change Notices: Not Supported 00:22:48.887 LBA Status Info Alert Notices: Not Supported 00:22:48.887 EGE Aggregate Log Change Notices: Not Supported 00:22:48.887 Normal NVM Subsystem Shutdown event: Not Supported 00:22:48.887 Zone Descriptor Change Notices: Not Supported 00:22:48.887 Discovery Log Change Notices: Supported 00:22:48.887 Controller Attributes 00:22:48.887 128-bit Host Identifier: Not Supported 00:22:48.887 Non-Operational Permissive Mode: Not Supported 00:22:48.887 NVM Sets: Not Supported 00:22:48.887 Read Recovery Levels: Not Supported 00:22:48.887 Endurance Groups: Not Supported 00:22:48.887 
Predictable Latency Mode: Not Supported 00:22:48.887 Traffic Based Keep ALive: Not Supported 00:22:48.887 Namespace Granularity: Not Supported 00:22:48.887 SQ Associations: Not Supported 00:22:48.888 UUID List: Not Supported 00:22:48.888 Multi-Domain Subsystem: Not Supported 00:22:48.888 Fixed Capacity Management: Not Supported 00:22:48.888 Variable Capacity Management: Not Supported 00:22:48.888 Delete Endurance Group: Not Supported 00:22:48.888 Delete NVM Set: Not Supported 00:22:48.888 Extended LBA Formats Supported: Not Supported 00:22:48.888 Flexible Data Placement Supported: Not Supported 00:22:48.888 00:22:48.888 Controller Memory Buffer Support 00:22:48.888 ================================ 00:22:48.888 Supported: No 00:22:48.888 00:22:48.888 Persistent Memory Region Support 00:22:48.888 ================================ 00:22:48.888 Supported: No 00:22:48.888 00:22:48.888 Admin Command Set Attributes 00:22:48.888 ============================ 00:22:48.888 Security Send/Receive: Not Supported 00:22:48.888 Format NVM: Not Supported 00:22:48.888 Firmware Activate/Download: Not Supported 00:22:48.888 Namespace Management: Not Supported 00:22:48.888 Device Self-Test: Not Supported 00:22:48.888 Directives: Not Supported 00:22:48.888 NVMe-MI: Not Supported 00:22:48.888 Virtualization Management: Not Supported 00:22:48.888 Doorbell Buffer Config: Not Supported 00:22:48.888 Get LBA Status Capability: Not Supported 00:22:48.888 Command & Feature Lockdown Capability: Not Supported 00:22:48.888 Abort Command Limit: 1 00:22:48.888 Async Event Request Limit: 4 00:22:48.888 Number of Firmware Slots: N/A 00:22:48.888 Firmware Slot 1 Read-Only: N/A 00:22:48.888 Firmware Activation Without Reset: N/A 00:22:48.888 Multiple Update Detection Support: N/A 00:22:48.888 Firmware Update Granularity: No Information Provided 00:22:48.888 Per-Namespace SMART Log: No 00:22:48.888 Asymmetric Namespace Access Log Page: Not Supported 00:22:48.888 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:48.888 Command Effects Log Page: Not Supported 00:22:48.888 Get Log Page Extended Data: Supported 00:22:48.888 Telemetry Log Pages: Not Supported 00:22:48.888 Persistent Event Log Pages: Not Supported 00:22:48.888 Supported Log Pages Log Page: May Support 00:22:48.888 Commands Supported & Effects Log Page: Not Supported 00:22:48.888 Feature Identifiers & Effects Log Page:May Support 00:22:48.888 NVMe-MI Commands & Effects Log Page: May Support 00:22:48.888 Data Area 4 for Telemetry Log: Not Supported 00:22:48.888 Error Log Page Entries Supported: 128 00:22:48.888 Keep Alive: Not Supported 00:22:48.888 00:22:48.888 NVM Command Set Attributes 00:22:48.888 ========================== 00:22:48.888 Submission Queue Entry Size 00:22:48.888 Max: 1 00:22:48.888 Min: 1 00:22:48.888 Completion Queue Entry Size 00:22:48.888 Max: 1 00:22:48.888 Min: 1 00:22:48.888 Number of Namespaces: 0 00:22:48.888 Compare Command: Not Supported 00:22:48.888 Write Uncorrectable Command: Not Supported 00:22:48.888 Dataset Management Command: Not Supported 00:22:48.888 Write Zeroes Command: Not Supported 00:22:48.888 Set Features Save Field: Not Supported 00:22:48.888 Reservations: Not Supported 00:22:48.888 Timestamp: Not Supported 00:22:48.888 Copy: Not Supported 00:22:48.888 Volatile Write Cache: Not Present 00:22:48.888 Atomic Write Unit (Normal): 1 00:22:48.888 Atomic Write Unit (PFail): 1 00:22:48.888 Atomic Compare & Write Unit: 1 00:22:48.888 Fused Compare & Write: Supported 00:22:48.888 Scatter-Gather List 00:22:48.888 SGL Command Set: Supported 00:22:48.888 SGL Keyed: Supported 00:22:48.888 SGL Bit Bucket Descriptor: Not Supported 00:22:48.888 SGL Metadata Pointer: Not Supported 00:22:48.888 Oversized SGL: Not Supported 00:22:48.888 SGL Metadata Address: Not Supported 00:22:48.888 SGL Offset: Supported 00:22:48.888 Transport SGL Data Block: Not Supported 00:22:48.888 Replay Protected Memory Block: Not Supported 00:22:48.888 00:22:48.888 
Firmware Slot Information 00:22:48.888 ========================= 00:22:48.888 Active slot: 0 00:22:48.888 00:22:48.888 00:22:48.888 Error Log 00:22:48.888 ========= 00:22:48.888 00:22:48.888 Active Namespaces 00:22:48.888 ================= 00:22:48.888 Discovery Log Page 00:22:48.888 ================== 00:22:48.888 Generation Counter: 2 00:22:48.888 Number of Records: 2 00:22:48.888 Record Format: 0 00:22:48.888 00:22:48.888 Discovery Log Entry 0 00:22:48.888 ---------------------- 00:22:48.888 Transport Type: 3 (TCP) 00:22:48.888 Address Family: 1 (IPv4) 00:22:48.888 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:48.888 Entry Flags: 00:22:48.888 Duplicate Returned Information: 1 00:22:48.888 Explicit Persistent Connection Support for Discovery: 1 00:22:48.888 Transport Requirements: 00:22:48.888 Secure Channel: Not Required 00:22:48.888 Port ID: 0 (0x0000) 00:22:48.888 Controller ID: 65535 (0xffff) 00:22:48.888 Admin Max SQ Size: 128 00:22:48.888 Transport Service Identifier: 4420 00:22:48.888 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:48.888 Transport Address: 10.0.0.2 00:22:48.888 Discovery Log Entry 1 00:22:48.888 ---------------------- 00:22:48.888 Transport Type: 3 (TCP) 00:22:48.888 Address Family: 1 (IPv4) 00:22:48.888 Subsystem Type: 2 (NVM Subsystem) 00:22:48.888 Entry Flags: 00:22:48.888 Duplicate Returned Information: 0 00:22:48.888 Explicit Persistent Connection Support for Discovery: 0 00:22:48.888 Transport Requirements: 00:22:48.888 Secure Channel: Not Required 00:22:48.888 Port ID: 0 (0x0000) 00:22:48.888 Controller ID: 65535 (0xffff) 00:22:48.888 Admin Max SQ Size: 128 00:22:48.888 Transport Service Identifier: 4420 00:22:48.888 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:48.888 Transport Address: 10.0.0.2 [2024-07-26 11:30:44.486853] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:48.888 [2024-07-26 11:30:44.486864] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275e40) on tqpair=0x11f2ec0 00:22:48.888 [2024-07-26 11:30:44.486870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.888 [2024-07-26 11:30:44.486874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1275fc0) on tqpair=0x11f2ec0 00:22:48.888 [2024-07-26 11:30:44.486878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.888 [2024-07-26 11:30:44.486882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1276140) on tqpair=0x11f2ec0 00:22:48.888 [2024-07-26 11:30:44.486886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.888 [2024-07-26 11:30:44.486890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.888 [2024-07-26 11:30:44.486894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.888 [2024-07-26 11:30:44.486904] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.888 [2024-07-26 11:30:44.486907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.888 [2024-07-26 11:30:44.486910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.888 [2024-07-26 11:30:44.486916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.888 [2024-07-26 11:30:44.486929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.888 [2024-07-26 11:30:44.486989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.888 [2024-07-26 11:30:44.486995] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.888 [2024-07-26 11:30:44.486998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.888 [2024-07-26 11:30:44.487001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.888 [2024-07-26 11:30:44.487007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.888 [2024-07-26 11:30:44.487010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.888 [2024-07-26 11:30:44.487013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.888 [2024-07-26 11:30:44.487019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.888 [2024-07-26 11:30:44.487030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.888 [2024-07-26 11:30:44.487109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.888 [2024-07-26 11:30:44.487115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.888 [2024-07-26 11:30:44.487117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.888 [2024-07-26 11:30:44.487122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.888 [2024-07-26 11:30:44.487126] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:48.888 [2024-07-26 11:30:44.487130] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:48.888 [2024-07-26 11:30:44.487138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.888 [2024-07-26 11:30:44.487141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.888 [2024-07-26 
11:30:44.487144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.889 [2024-07-26 11:30:44.487149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.889 [2024-07-26 11:30:44.487158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.889 [2024-07-26 11:30:44.487225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.889 [2024-07-26 11:30:44.487230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.889 [2024-07-26 11:30:44.487233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.889 [2024-07-26 11:30:44.487245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.889 [2024-07-26 11:30:44.487256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.889 [2024-07-26 11:30:44.487265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.889 [2024-07-26 11:30:44.487331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.889 [2024-07-26 11:30:44.487337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.889 [2024-07-26 11:30:44.487340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 
00:22:48.889 [2024-07-26 11:30:44.487351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.889 [2024-07-26 11:30:44.487363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.889 [2024-07-26 11:30:44.487372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.889 [2024-07-26 11:30:44.487433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.889 [2024-07-26 11:30:44.487438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.889 [2024-07-26 11:30:44.487441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.889 [2024-07-26 11:30:44.487451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.889 [2024-07-26 11:30:44.487463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.889 [2024-07-26 11:30:44.487472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.889 [2024-07-26 11:30:44.487550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.889 [2024-07-26 11:30:44.487558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.889 
[2024-07-26 11:30:44.487561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.889 [2024-07-26 11:30:44.487571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.487578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.889 [2024-07-26 11:30:44.487583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.889 [2024-07-26 11:30:44.487592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 0 00:22:48.889 [2024-07-26 11:30:44.491636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.889 [2024-07-26 11:30:44.491644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.889 [2024-07-26 11:30:44.491647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.491650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.889 [2024-07-26 11:30:44.491660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.491665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.491668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f2ec0) 00:22:48.889 [2024-07-26 11:30:44.491674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.889 [2024-07-26 11:30:44.491684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12762c0, cid 3, qid 
0 00:22:48.889 [2024-07-26 11:30:44.491818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:48.889 [2024-07-26 11:30:44.491824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:48.889 [2024-07-26 11:30:44.491827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:48.889 [2024-07-26 11:30:44.491830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12762c0) on tqpair=0x11f2ec0 00:22:48.889 [2024-07-26 11:30:44.491836] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:22:48.889 00:22:48.889 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:48.889 [2024-07-26 11:30:44.528228] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:22:48.889 [2024-07-26 11:30:44.528269] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592982 ] 00:22:48.889 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.152 [2024-07-26 11:30:44.557631] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:49.152 [2024-07-26 11:30:44.557669] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:49.152 [2024-07-26 11:30:44.557674] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:49.152 [2024-07-26 11:30:44.557685] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:49.152 [2024-07-26 11:30:44.557692] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:49.152 [2024-07-26 11:30:44.557896] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:49.152 [2024-07-26 11:30:44.557919] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f83ec0 0 00:22:49.152 [2024-07-26 11:30:44.572636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:49.152 [2024-07-26 11:30:44.572649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:49.152 [2024-07-26 11:30:44.572653] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:49.152 [2024-07-26 11:30:44.572656] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:49.152 [2024-07-26 11:30:44.572685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.572690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:49.152 [2024-07-26 11:30:44.572694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.152 [2024-07-26 11:30:44.572704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:49.152 [2024-07-26 11:30:44.572717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.152 [2024-07-26 11:30:44.580638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.152 [2024-07-26 11:30:44.580647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.152 [2024-07-26 11:30:44.580650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.152 [2024-07-26 11:30:44.580663] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:49.152 [2024-07-26 11:30:44.580668] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:49.152 [2024-07-26 11:30:44.580672] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:49.152 [2024-07-26 11:30:44.580683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.152 [2024-07-26 11:30:44.580696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.152 [2024-07-26 11:30:44.580708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 
00:22:49.152 [2024-07-26 11:30:44.580817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.152 [2024-07-26 11:30:44.580823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.152 [2024-07-26 11:30:44.580826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.152 [2024-07-26 11:30:44.580834] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:49.152 [2024-07-26 11:30:44.580841] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:49.152 [2024-07-26 11:30:44.580847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.152 [2024-07-26 11:30:44.580859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.152 [2024-07-26 11:30:44.580869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.152 [2024-07-26 11:30:44.580930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.152 [2024-07-26 11:30:44.580936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.152 [2024-07-26 11:30:44.580941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.152 [2024-07-26 11:30:44.580948] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:49.152 [2024-07-26 11:30:44.580954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:49.152 [2024-07-26 11:30:44.580960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.580966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.152 [2024-07-26 11:30:44.580972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.152 [2024-07-26 11:30:44.580981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.152 [2024-07-26 11:30:44.581042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.152 [2024-07-26 11:30:44.581048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.152 [2024-07-26 11:30:44.581051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.152 [2024-07-26 11:30:44.581058] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:49.152 [2024-07-26 11:30:44.581065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.152 [2024-07-26 11:30:44.581078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.152 [2024-07-26 11:30:44.581087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.152 [2024-07-26 11:30:44.581161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.152 [2024-07-26 11:30:44.581167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.152 [2024-07-26 11:30:44.581170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.152 [2024-07-26 11:30:44.581177] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:49.152 [2024-07-26 11:30:44.581181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:49.152 [2024-07-26 11:30:44.581188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:49.152 [2024-07-26 11:30:44.581293] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:49.152 [2024-07-26 11:30:44.581297] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:49.152 [2024-07-26 11:30:44.581304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.152 [2024-07-26 11:30:44.581315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.152 [2024-07-26 11:30:44.581325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.152 [2024-07-26 11:30:44.581391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.152 [2024-07-26 11:30:44.581396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.152 [2024-07-26 11:30:44.581399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.152 [2024-07-26 11:30:44.581406] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:49.152 [2024-07-26 11:30:44.581414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.152 [2024-07-26 11:30:44.581426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.152 [2024-07-26 11:30:44.581435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.152 [2024-07-26 11:30:44.581498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.152 [2024-07-26 11:30:44.581504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.152 [2024-07-26 11:30:44.581507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.152 [2024-07-26 11:30:44.581510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.152 [2024-07-26 
11:30:44.581514] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:49.152 [2024-07-26 11:30:44.581517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:49.152 [2024-07-26 11:30:44.581524] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:49.153 [2024-07-26 11:30:44.581530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.581537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.581546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.153 [2024-07-26 11:30:44.581556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.153 [2024-07-26 11:30:44.581683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.153 [2024-07-26 11:30:44.581689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.153 [2024-07-26 11:30:44.581692] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581695] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=4096, cccid=0 00:22:49.153 [2024-07-26 11:30:44.581699] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2006e40) on tqpair(0x1f83ec0): expected_datao=0, payload_size=4096 00:22:49.153 [2024-07-26 11:30:44.581703] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581709] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581713] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.153 [2024-07-26 11:30:44.581750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.153 [2024-07-26 11:30:44.581753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.153 [2024-07-26 11:30:44.581764] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:49.153 [2024-07-26 11:30:44.581768] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:49.153 [2024-07-26 11:30:44.581772] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:49.153 [2024-07-26 11:30:44.581775] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:49.153 [2024-07-26 11:30:44.581779] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:49.153 [2024-07-26 11:30:44.581783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.581790] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.581798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581801] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.581811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.153 [2024-07-26 11:30:44.581822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006e40, cid 0, qid 0 00:22:49.153 [2024-07-26 11:30:44.581894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.153 [2024-07-26 11:30:44.581900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.153 [2024-07-26 11:30:44.581903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.153 [2024-07-26 11:30:44.581911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.581922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.153 [2024-07-26 11:30:44.581927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.581938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:49.153 [2024-07-26 11:30:44.581942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.581953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.153 [2024-07-26 11:30:44.581958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.581969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.153 [2024-07-26 11:30:44.581973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.581983] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.581989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.581992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.581998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.153 [2024-07-26 11:30:44.582008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x2006e40, cid 0, qid 0 00:22:49.153 [2024-07-26 11:30:44.582013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2006fc0, cid 1, qid 0 00:22:49.153 [2024-07-26 11:30:44.582017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007140, cid 2, qid 0 00:22:49.153 [2024-07-26 11:30:44.582021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20072c0, cid 3, qid 0 00:22:49.153 [2024-07-26 11:30:44.582025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007440, cid 4, qid 0 00:22:49.153 [2024-07-26 11:30:44.582123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.153 [2024-07-26 11:30:44.582129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.153 [2024-07-26 11:30:44.582132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007440) on tqpair=0x1f83ec0 00:22:49.153 [2024-07-26 11:30:44.582139] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:49.153 [2024-07-26 11:30:44.582143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.582151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.582157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.582162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582169] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.582174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.153 [2024-07-26 11:30:44.582184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007440, cid 4, qid 0 00:22:49.153 [2024-07-26 11:30:44.582251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.153 [2024-07-26 11:30:44.582256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.153 [2024-07-26 11:30:44.582259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007440) on tqpair=0x1f83ec0 00:22:49.153 [2024-07-26 11:30:44.582315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.582324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.582330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f83ec0) 00:22:49.153 [2024-07-26 11:30:44.582339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.153 [2024-07-26 11:30:44.582348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007440, cid 4, qid 0 00:22:49.153 [2024-07-26 11:30:44.582422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.153 [2024-07-26 11:30:44.582428] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.153 [2024-07-26 11:30:44.582431] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582434] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=4096, cccid=4 00:22:49.153 [2024-07-26 11:30:44.582438] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2007440) on tqpair(0x1f83ec0): expected_datao=0, payload_size=4096 00:22:49.153 [2024-07-26 11:30:44.582442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582459] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582463] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.153 [2024-07-26 11:30:44.582507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.153 [2024-07-26 11:30:44.582509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.153 [2024-07-26 11:30:44.582513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007440) on tqpair=0x1f83ec0 00:22:49.153 [2024-07-26 11:30:44.582521] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:49.153 [2024-07-26 11:30:44.582534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:49.153 [2024-07-26 11:30:44.582542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.582557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.582566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007440, cid 4, qid 0 00:22:49.154 [2024-07-26 11:30:44.582659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.154 [2024-07-26 11:30:44.582665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.154 [2024-07-26 11:30:44.582669] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582672] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=4096, cccid=4 00:22:49.154 [2024-07-26 11:30:44.582675] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2007440) on tqpair(0x1f83ec0): expected_datao=0, payload_size=4096 00:22:49.154 [2024-07-26 11:30:44.582679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582684] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582687] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.154 [2024-07-26 11:30:44.582715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.154 [2024-07-26 11:30:44.582717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007440) on tqpair=0x1f83ec0 00:22:49.154 [2024-07-26 11:30:44.582733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:49.154 [2024-07-26 
11:30:44.582740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.582757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.582766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007440, cid 4, qid 0 00:22:49.154 [2024-07-26 11:30:44.582848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.154 [2024-07-26 11:30:44.582854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.154 [2024-07-26 11:30:44.582857] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582860] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=4096, cccid=4 00:22:49.154 [2024-07-26 11:30:44.582864] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2007440) on tqpair(0x1f83ec0): expected_datao=0, payload_size=4096 00:22:49.154 [2024-07-26 11:30:44.582867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582873] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582876] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.154 [2024-07-26 11:30:44.582896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.154 [2024-07-26 11:30:44.582899] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007440) on tqpair=0x1f83ec0 00:22:49.154 [2024-07-26 11:30:44.582909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582923] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582942] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:49.154 [2024-07-26 11:30:44.582946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:49.154 [2024-07-26 11:30:44.582950] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:49.154 [2024-07-26 11:30:44.582970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.582979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.582984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.582991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.582996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.154 [2024-07-26 11:30:44.583008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007440, cid 4, qid 0 00:22:49.154 [2024-07-26 11:30:44.583014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20075c0, cid 5, qid 0 00:22:49.154 [2024-07-26 11:30:44.583088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.154 [2024-07-26 11:30:44.583093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.154 [2024-07-26 11:30:44.583096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007440) on tqpair=0x1f83ec0 00:22:49.154 [2024-07-26 11:30:44.583105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.154 [2024-07-26 11:30:44.583110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.154 [2024-07-26 11:30:44.583113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20075c0) on tqpair=0x1f83ec0 00:22:49.154 [2024-07-26 11:30:44.583123] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.583132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.583142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20075c0, cid 5, qid 0 00:22:49.154 [2024-07-26 11:30:44.583206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.154 [2024-07-26 11:30:44.583212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.154 [2024-07-26 11:30:44.583215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20075c0) on tqpair=0x1f83ec0 00:22:49.154 [2024-07-26 11:30:44.583226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.583234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.583243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20075c0, cid 5, qid 0 00:22:49.154 [2024-07-26 11:30:44.583319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.154 [2024-07-26 11:30:44.583325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.154 [2024-07-26 11:30:44.583328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20075c0) on 
tqpair=0x1f83ec0 00:22:49.154 [2024-07-26 11:30:44.583339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.583348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.583357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20075c0, cid 5, qid 0 00:22:49.154 [2024-07-26 11:30:44.583422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.154 [2024-07-26 11:30:44.583428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.154 [2024-07-26 11:30:44.583431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20075c0) on tqpair=0x1f83ec0 00:22:49.154 [2024-07-26 11:30:44.583446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.583456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.583463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.583472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 
11:30:44.583478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.583486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.583492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.154 [2024-07-26 11:30:44.583495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f83ec0) 00:22:49.154 [2024-07-26 11:30:44.583500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.154 [2024-07-26 11:30:44.583510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20075c0, cid 5, qid 0 00:22:49.154 [2024-07-26 11:30:44.583514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007440, cid 4, qid 0 00:22:49.155 [2024-07-26 11:30:44.583518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2007740, cid 6, qid 0 00:22:49.155 [2024-07-26 11:30:44.583522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20078c0, cid 7, qid 0 00:22:49.155 [2024-07-26 11:30:44.583664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.155 [2024-07-26 11:30:44.583670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.155 [2024-07-26 11:30:44.583673] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583676] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=8192, cccid=5 00:22:49.155 [2024-07-26 11:30:44.583680] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x20075c0) on tqpair(0x1f83ec0): expected_datao=0, payload_size=8192 00:22:49.155 [2024-07-26 11:30:44.583683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583696] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583700] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.155 [2024-07-26 11:30:44.583709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.155 [2024-07-26 11:30:44.583712] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583715] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=512, cccid=4 00:22:49.155 [2024-07-26 11:30:44.583719] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2007440) on tqpair(0x1f83ec0): expected_datao=0, payload_size=512 00:22:49.155 [2024-07-26 11:30:44.583722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583728] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583731] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.155 [2024-07-26 11:30:44.583740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.155 [2024-07-26 11:30:44.583743] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583746] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=512, cccid=6 00:22:49.155 [2024-07-26 11:30:44.583749] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2007740) on tqpair(0x1f83ec0): expected_datao=0, 
payload_size=512 00:22:49.155 [2024-07-26 11:30:44.583754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583760] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583763] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.155 [2024-07-26 11:30:44.583772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.155 [2024-07-26 11:30:44.583775] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583778] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f83ec0): datao=0, datal=4096, cccid=7 00:22:49.155 [2024-07-26 11:30:44.583782] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20078c0) on tqpair(0x1f83ec0): expected_datao=0, payload_size=4096 00:22:49.155 [2024-07-26 11:30:44.583785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583795] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583798] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.155 [2024-07-26 11:30:44.583809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.155 [2024-07-26 11:30:44.583812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20075c0) on tqpair=0x1f83ec0 00:22:49.155 [2024-07-26 11:30:44.583825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.155 [2024-07-26 11:30:44.583830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.155 [2024-07-26 
11:30:44.583833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007440) on tqpair=0x1f83ec0 00:22:49.155 [2024-07-26 11:30:44.583844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.155 [2024-07-26 11:30:44.583849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.155 [2024-07-26 11:30:44.583852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007740) on tqpair=0x1f83ec0 00:22:49.155 [2024-07-26 11:30:44.583861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.155 [2024-07-26 11:30:44.583866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.155 [2024-07-26 11:30:44.583868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.155 [2024-07-26 11:30:44.583872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20078c0) on tqpair=0x1f83ec0 00:22:49.155 ===================================================== 00:22:49.155 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.155 ===================================================== 00:22:49.155 Controller Capabilities/Features 00:22:49.155 ================================ 00:22:49.155 Vendor ID: 8086 00:22:49.155 Subsystem Vendor ID: 8086 00:22:49.155 Serial Number: SPDK00000000000001 00:22:49.155 Model Number: SPDK bdev Controller 00:22:49.155 Firmware Version: 24.09 00:22:49.155 Recommended Arb Burst: 6 00:22:49.155 IEEE OUI Identifier: e4 d2 5c 00:22:49.155 Multi-path I/O 00:22:49.155 May have multiple subsystem ports: Yes 00:22:49.155 May have multiple controllers: Yes 00:22:49.155 Associated with SR-IOV VF: No 00:22:49.155 Max Data Transfer Size: 131072 00:22:49.155 Max Number of Namespaces: 32 
00:22:49.155 Max Number of I/O Queues: 127 00:22:49.155 NVMe Specification Version (VS): 1.3 00:22:49.155 NVMe Specification Version (Identify): 1.3 00:22:49.155 Maximum Queue Entries: 128 00:22:49.155 Contiguous Queues Required: Yes 00:22:49.155 Arbitration Mechanisms Supported 00:22:49.155 Weighted Round Robin: Not Supported 00:22:49.155 Vendor Specific: Not Supported 00:22:49.155 Reset Timeout: 15000 ms 00:22:49.155 Doorbell Stride: 4 bytes 00:22:49.155 NVM Subsystem Reset: Not Supported 00:22:49.155 Command Sets Supported 00:22:49.155 NVM Command Set: Supported 00:22:49.155 Boot Partition: Not Supported 00:22:49.155 Memory Page Size Minimum: 4096 bytes 00:22:49.155 Memory Page Size Maximum: 4096 bytes 00:22:49.155 Persistent Memory Region: Not Supported 00:22:49.155 Optional Asynchronous Events Supported 00:22:49.155 Namespace Attribute Notices: Supported 00:22:49.155 Firmware Activation Notices: Not Supported 00:22:49.155 ANA Change Notices: Not Supported 00:22:49.155 PLE Aggregate Log Change Notices: Not Supported 00:22:49.155 LBA Status Info Alert Notices: Not Supported 00:22:49.155 EGE Aggregate Log Change Notices: Not Supported 00:22:49.155 Normal NVM Subsystem Shutdown event: Not Supported 00:22:49.155 Zone Descriptor Change Notices: Not Supported 00:22:49.155 Discovery Log Change Notices: Not Supported 00:22:49.155 Controller Attributes 00:22:49.155 128-bit Host Identifier: Supported 00:22:49.155 Non-Operational Permissive Mode: Not Supported 00:22:49.155 NVM Sets: Not Supported 00:22:49.155 Read Recovery Levels: Not Supported 00:22:49.155 Endurance Groups: Not Supported 00:22:49.155 Predictable Latency Mode: Not Supported 00:22:49.155 Traffic Based Keep ALive: Not Supported 00:22:49.155 Namespace Granularity: Not Supported 00:22:49.155 SQ Associations: Not Supported 00:22:49.155 UUID List: Not Supported 00:22:49.155 Multi-Domain Subsystem: Not Supported 00:22:49.155 Fixed Capacity Management: Not Supported 00:22:49.155 Variable Capacity Management: Not 
Supported 00:22:49.155 Delete Endurance Group: Not Supported 00:22:49.155 Delete NVM Set: Not Supported 00:22:49.155 Extended LBA Formats Supported: Not Supported 00:22:49.155 Flexible Data Placement Supported: Not Supported 00:22:49.155 00:22:49.155 Controller Memory Buffer Support 00:22:49.155 ================================ 00:22:49.155 Supported: No 00:22:49.155 00:22:49.155 Persistent Memory Region Support 00:22:49.155 ================================ 00:22:49.155 Supported: No 00:22:49.155 00:22:49.155 Admin Command Set Attributes 00:22:49.155 ============================ 00:22:49.155 Security Send/Receive: Not Supported 00:22:49.155 Format NVM: Not Supported 00:22:49.155 Firmware Activate/Download: Not Supported 00:22:49.155 Namespace Management: Not Supported 00:22:49.155 Device Self-Test: Not Supported 00:22:49.155 Directives: Not Supported 00:22:49.155 NVMe-MI: Not Supported 00:22:49.155 Virtualization Management: Not Supported 00:22:49.155 Doorbell Buffer Config: Not Supported 00:22:49.155 Get LBA Status Capability: Not Supported 00:22:49.155 Command & Feature Lockdown Capability: Not Supported 00:22:49.155 Abort Command Limit: 4 00:22:49.155 Async Event Request Limit: 4 00:22:49.155 Number of Firmware Slots: N/A 00:22:49.155 Firmware Slot 1 Read-Only: N/A 00:22:49.155 Firmware Activation Without Reset: N/A 00:22:49.155 Multiple Update Detection Support: N/A 00:22:49.155 Firmware Update Granularity: No Information Provided 00:22:49.155 Per-Namespace SMART Log: No 00:22:49.155 Asymmetric Namespace Access Log Page: Not Supported 00:22:49.155 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:49.155 Command Effects Log Page: Supported 00:22:49.155 Get Log Page Extended Data: Supported 00:22:49.155 Telemetry Log Pages: Not Supported 00:22:49.155 Persistent Event Log Pages: Not Supported 00:22:49.155 Supported Log Pages Log Page: May Support 00:22:49.155 Commands Supported & Effects Log Page: Not Supported 00:22:49.156 Feature Identifiers & Effects Log Page:May 
Support 00:22:49.156 NVMe-MI Commands & Effects Log Page: May Support 00:22:49.156 Data Area 4 for Telemetry Log: Not Supported 00:22:49.156 Error Log Page Entries Supported: 128 00:22:49.156 Keep Alive: Supported 00:22:49.156 Keep Alive Granularity: 10000 ms 00:22:49.156 00:22:49.156 NVM Command Set Attributes 00:22:49.156 ========================== 00:22:49.156 Submission Queue Entry Size 00:22:49.156 Max: 64 00:22:49.156 Min: 64 00:22:49.156 Completion Queue Entry Size 00:22:49.156 Max: 16 00:22:49.156 Min: 16 00:22:49.156 Number of Namespaces: 32 00:22:49.156 Compare Command: Supported 00:22:49.156 Write Uncorrectable Command: Not Supported 00:22:49.156 Dataset Management Command: Supported 00:22:49.156 Write Zeroes Command: Supported 00:22:49.156 Set Features Save Field: Not Supported 00:22:49.156 Reservations: Supported 00:22:49.156 Timestamp: Not Supported 00:22:49.156 Copy: Supported 00:22:49.156 Volatile Write Cache: Present 00:22:49.156 Atomic Write Unit (Normal): 1 00:22:49.156 Atomic Write Unit (PFail): 1 00:22:49.156 Atomic Compare & Write Unit: 1 00:22:49.156 Fused Compare & Write: Supported 00:22:49.156 Scatter-Gather List 00:22:49.156 SGL Command Set: Supported 00:22:49.156 SGL Keyed: Supported 00:22:49.156 SGL Bit Bucket Descriptor: Not Supported 00:22:49.156 SGL Metadata Pointer: Not Supported 00:22:49.156 Oversized SGL: Not Supported 00:22:49.156 SGL Metadata Address: Not Supported 00:22:49.156 SGL Offset: Supported 00:22:49.156 Transport SGL Data Block: Not Supported 00:22:49.156 Replay Protected Memory Block: Not Supported 00:22:49.156 00:22:49.156 Firmware Slot Information 00:22:49.156 ========================= 00:22:49.156 Active slot: 1 00:22:49.156 Slot 1 Firmware Revision: 24.09 00:22:49.156 00:22:49.156 00:22:49.156 Commands Supported and Effects 00:22:49.156 ============================== 00:22:49.156 Admin Commands 00:22:49.156 -------------- 00:22:49.156 Get Log Page (02h): Supported 00:22:49.156 Identify (06h): Supported 00:22:49.156 
Abort (08h): Supported 00:22:49.156 Set Features (09h): Supported 00:22:49.156 Get Features (0Ah): Supported 00:22:49.156 Asynchronous Event Request (0Ch): Supported 00:22:49.156 Keep Alive (18h): Supported 00:22:49.156 I/O Commands 00:22:49.156 ------------ 00:22:49.156 Flush (00h): Supported LBA-Change 00:22:49.156 Write (01h): Supported LBA-Change 00:22:49.156 Read (02h): Supported 00:22:49.156 Compare (05h): Supported 00:22:49.156 Write Zeroes (08h): Supported LBA-Change 00:22:49.156 Dataset Management (09h): Supported LBA-Change 00:22:49.156 Copy (19h): Supported LBA-Change 00:22:49.156 00:22:49.156 Error Log 00:22:49.156 ========= 00:22:49.156 00:22:49.156 Arbitration 00:22:49.156 =========== 00:22:49.156 Arbitration Burst: 1 00:22:49.156 00:22:49.156 Power Management 00:22:49.156 ================ 00:22:49.156 Number of Power States: 1 00:22:49.156 Current Power State: Power State #0 00:22:49.156 Power State #0: 00:22:49.156 Max Power: 0.00 W 00:22:49.156 Non-Operational State: Operational 00:22:49.156 Entry Latency: Not Reported 00:22:49.156 Exit Latency: Not Reported 00:22:49.156 Relative Read Throughput: 0 00:22:49.156 Relative Read Latency: 0 00:22:49.156 Relative Write Throughput: 0 00:22:49.156 Relative Write Latency: 0 00:22:49.156 Idle Power: Not Reported 00:22:49.156 Active Power: Not Reported 00:22:49.156 Non-Operational Permissive Mode: Not Supported 00:22:49.156 00:22:49.156 Health Information 00:22:49.156 ================== 00:22:49.156 Critical Warnings: 00:22:49.156 Available Spare Space: OK 00:22:49.156 Temperature: OK 00:22:49.156 Device Reliability: OK 00:22:49.156 Read Only: No 00:22:49.156 Volatile Memory Backup: OK 00:22:49.156 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:49.156 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:49.156 Available Spare: 0% 00:22:49.156 Available Spare Threshold: 0% 00:22:49.156 Life Percentage Used:[2024-07-26 11:30:44.583950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:49.156 [2024-07-26 11:30:44.583954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f83ec0) 00:22:49.156 [2024-07-26 11:30:44.583960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.156 [2024-07-26 11:30:44.583971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20078c0, cid 7, qid 0 00:22:49.156 [2024-07-26 11:30:44.584044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.156 [2024-07-26 11:30:44.584050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.156 [2024-07-26 11:30:44.584053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.156 [2024-07-26 11:30:44.584056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20078c0) on tqpair=0x1f83ec0 00:22:49.156 [2024-07-26 11:30:44.584080] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:49.156 [2024-07-26 11:30:44.584088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006e40) on tqpair=0x1f83ec0 00:22:49.156 [2024-07-26 11:30:44.584093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.156 [2024-07-26 11:30:44.584099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2006fc0) on tqpair=0x1f83ec0 00:22:49.156 [2024-07-26 11:30:44.584103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.156 [2024-07-26 11:30:44.584107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2007140) on tqpair=0x1f83ec0 00:22:49.156 [2024-07-26 11:30:44.584111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.156 [2024-07-26 11:30:44.584115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20072c0) on tqpair=0x1f83ec0 00:22:49.156 [2024-07-26 11:30:44.584118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.156 [2024-07-26 11:30:44.584125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.156 [2024-07-26 11:30:44.584128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.156 [2024-07-26 11:30:44.584131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f83ec0) 00:22:49.156 [2024-07-26 11:30:44.584137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.156 [2024-07-26 11:30:44.584147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20072c0, cid 3, qid 0 00:22:49.156 [2024-07-26 11:30:44.584209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.156 [2024-07-26 11:30:44.584214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.156 [2024-07-26 11:30:44.584217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.156 [2024-07-26 11:30:44.584220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20072c0) on tqpair=0x1f83ec0 00:22:49.156 [2024-07-26 11:30:44.584226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.156 [2024-07-26 11:30:44.584229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.156 [2024-07-26 11:30:44.584232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f83ec0) 00:22:49.156 [2024-07-26 11:30:44.584238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.156 [2024-07-26 11:30:44.584249] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20072c0, cid 3, qid 0 00:22:49.156 [2024-07-26 11:30:44.584320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.156 [2024-07-26 11:30:44.584325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.157 [2024-07-26 11:30:44.584328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20072c0) on tqpair=0x1f83ec0 00:22:49.157 [2024-07-26 11:30:44.584335] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:49.157 [2024-07-26 11:30:44.584339] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:49.157 [2024-07-26 11:30:44.584346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f83ec0) 00:22:49.157 [2024-07-26 11:30:44.584358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.157 [2024-07-26 11:30:44.584366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20072c0, cid 3, qid 0 00:22:49.157 [2024-07-26 11:30:44.584429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.157 [2024-07-26 11:30:44.584434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.157 [2024-07-26 11:30:44.584437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20072c0) on tqpair=0x1f83ec0 00:22:49.157 [2024-07-26 11:30:44.584450] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f83ec0) 00:22:49.157 [2024-07-26 11:30:44.584462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.157 [2024-07-26 11:30:44.584471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20072c0, cid 3, qid 0 00:22:49.157 [2024-07-26 11:30:44.584546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.157 [2024-07-26 11:30:44.584551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.157 [2024-07-26 11:30:44.584554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20072c0) on tqpair=0x1f83ec0 00:22:49.157 [2024-07-26 11:30:44.584566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.584572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f83ec0) 00:22:49.157 [2024-07-26 11:30:44.584578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.157 [2024-07-26 11:30:44.584587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20072c0, cid 3, qid 0 00:22:49.157 [2024-07-26 11:30:44.588634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.157 [2024-07-26 11:30:44.588642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.157 [2024-07-26 11:30:44.588645] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.588648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20072c0) on tqpair=0x1f83ec0 00:22:49.157 [2024-07-26 11:30:44.588657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.588661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.588664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f83ec0) 00:22:49.157 [2024-07-26 11:30:44.588669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.157 [2024-07-26 11:30:44.588680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20072c0, cid 3, qid 0 00:22:49.157 [2024-07-26 11:30:44.588782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.157 [2024-07-26 11:30:44.588787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.157 [2024-07-26 11:30:44.588790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.157 [2024-07-26 11:30:44.588793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20072c0) on tqpair=0x1f83ec0 00:22:49.157 [2024-07-26 11:30:44.588800] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:22:49.157 0% 00:22:49.157 Data Units Read: 0 00:22:49.157 Data Units Written: 0 00:22:49.157 Host Read Commands: 0 00:22:49.157 Host Write Commands: 0 00:22:49.157 Controller Busy Time: 0 minutes 00:22:49.157 Power Cycles: 0 00:22:49.157 Power On Hours: 0 hours 00:22:49.157 Unsafe Shutdowns: 0 00:22:49.157 Unrecoverable Media Errors: 0 00:22:49.157 Lifetime Error Log Entries: 0 00:22:49.157 Warning Temperature Time: 0 minutes 00:22:49.157 Critical Temperature Time: 0 minutes 00:22:49.157 00:22:49.157 Number 
of Queues 00:22:49.157 ================ 00:22:49.157 Number of I/O Submission Queues: 127 00:22:49.157 Number of I/O Completion Queues: 127 00:22:49.157 00:22:49.157 Active Namespaces 00:22:49.157 ================= 00:22:49.157 Namespace ID:1 00:22:49.157 Error Recovery Timeout: Unlimited 00:22:49.157 Command Set Identifier: NVM (00h) 00:22:49.157 Deallocate: Supported 00:22:49.157 Deallocated/Unwritten Error: Not Supported 00:22:49.157 Deallocated Read Value: Unknown 00:22:49.157 Deallocate in Write Zeroes: Not Supported 00:22:49.157 Deallocated Guard Field: 0xFFFF 00:22:49.157 Flush: Supported 00:22:49.157 Reservation: Supported 00:22:49.157 Namespace Sharing Capabilities: Multiple Controllers 00:22:49.157 Size (in LBAs): 131072 (0GiB) 00:22:49.157 Capacity (in LBAs): 131072 (0GiB) 00:22:49.157 Utilization (in LBAs): 131072 (0GiB) 00:22:49.157 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:49.157 EUI64: ABCDEF0123456789 00:22:49.157 UUID: eded1ca0-8961-46fc-a4c2-73114c526bf1 00:22:49.157 Thin Provisioning: Not Supported 00:22:49.157 Per-NS Atomic Units: Yes 00:22:49.157 Atomic Boundary Size (Normal): 0 00:22:49.157 Atomic Boundary Size (PFail): 0 00:22:49.157 Atomic Boundary Offset: 0 00:22:49.157 Maximum Single Source Range Length: 65535 00:22:49.157 Maximum Copy Length: 65535 00:22:49.157 Maximum Source Range Count: 1 00:22:49.157 NGUID/EUI64 Never Reused: No 00:22:49.157 Namespace Write Protected: No 00:22:49.157 Number of LBA Formats: 1 00:22:49.157 Current LBA Format: LBA Format #00 00:22:49.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:49.157 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.157 rmmod nvme_tcp 00:22:49.157 rmmod nvme_fabrics 00:22:49.157 rmmod nvme_keyring 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1592854 ']' 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1592854 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1592854 ']' 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1592854 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.157 11:30:44 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1592854 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1592854' 00:22:49.157 killing process with pid 1592854 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1592854 00:22:49.157 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1592854 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.416 11:30:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.951 11:30:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:51.951 00:22:51.951 real 0m9.552s 00:22:51.951 user 0m7.308s 00:22:51.951 sys 0m4.737s 00:22:51.951 11:30:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:51.951 11:30:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.951 ************************************ 00:22:51.951 END TEST nvmf_identify 00:22:51.951 ************************************ 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 ************************************ 00:22:51.951 START TEST nvmf_perf 00:22:51.951 ************************************ 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:51.951 * Looking for test storage... 00:22:51.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 
00:22:51.951 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.952 11:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@295 -- # net_devs=() 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.256 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.257 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.257 11:30:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.257 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:22:57.257 00:22:57.257 --- 10.0.0.2 ping statistics --- 00:22:57.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.257 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:22:57.257 00:22:57.257 --- 10.0.0.1 ping statistics --- 00:22:57.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.257 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.257 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1596485 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1596485 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.516 
11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1596485 ']' 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.516 11:30:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.516 [2024-07-26 11:30:52.999466] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:22:57.516 [2024-07-26 11:30:52.999511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.516 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.516 [2024-07-26 11:30:53.070428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.516 [2024-07-26 11:30:53.148485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.516 [2024-07-26 11:30:53.148522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.516 [2024-07-26 11:30:53.148529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.516 [2024-07-26 11:30:53.148535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:57.516 [2024-07-26 11:30:53.148540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.516 [2024-07-26 11:30:53.148583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.516 [2024-07-26 11:30:53.148692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.516 [2024-07-26 11:30:53.148718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.516 [2024-07-26 11:30:53.148719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:58.448 11:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:01.776 11:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:01.776 11:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:01.776 [2024-07-26 11:30:57.373902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.776 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.034 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:02.034 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.291 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:02.291 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:02.549 11:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.549 [2024-07-26 11:30:58.117394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.549 11:30:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:02.807 11:30:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:02.807 11:30:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:02.807 11:30:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:02.807 11:30:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:04.179 Initializing NVMe Controllers 00:23:04.179 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:04.179 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:04.179 Initialization complete. Launching workers. 00:23:04.179 ======================================================== 00:23:04.179 Latency(us) 00:23:04.179 Device Information : IOPS MiB/s Average min max 00:23:04.179 PCIE (0000:5e:00.0) NSID 1 from core 0: 100142.39 391.18 319.15 39.09 5189.35 00:23:04.179 ======================================================== 00:23:04.179 Total : 100142.39 391.18 319.15 39.09 5189.35 00:23:04.179 00:23:04.179 11:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:04.179 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.551 Initializing NVMe Controllers 00:23:05.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:05.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:05.551 
Initialization complete. Launching workers. 00:23:05.551 ======================================================== 00:23:05.551 Latency(us) 00:23:05.551 Device Information : IOPS MiB/s Average min max 00:23:05.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 105.00 0.41 9825.61 106.95 45678.10 00:23:05.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18862.34 6968.70 47898.06 00:23:05.551 ======================================================== 00:23:05.551 Total : 160.00 0.62 12931.99 106.95 47898.06 00:23:05.552 00:23:05.552 11:31:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.552 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.924 Initializing NVMe Controllers 00:23:06.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:06.924 Initialization complete. Launching workers. 
00:23:06.924 ======================================================== 00:23:06.924 Latency(us) 00:23:06.924 Device Information : IOPS MiB/s Average min max 00:23:06.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11319.95 44.22 2826.77 404.28 8742.18 00:23:06.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3814.98 14.90 8418.60 5692.43 17332.90 00:23:06.924 ======================================================== 00:23:06.924 Total : 15134.94 59.12 4236.27 404.28 17332.90 00:23:06.924 00:23:06.924 11:31:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:06.924 11:31:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:06.924 11:31:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:06.924 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.452 Initializing NVMe Controllers 00:23:09.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.452 Controller IO queue size 128, less than required. 00:23:09.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:09.452 Controller IO queue size 128, less than required. 00:23:09.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:09.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:09.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:09.452 Initialization complete. Launching workers. 
00:23:09.452 ======================================================== 00:23:09.452 Latency(us) 00:23:09.452 Device Information : IOPS MiB/s Average min max 00:23:09.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1890.97 472.74 68721.38 48125.75 101987.67 00:23:09.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.49 151.12 221460.98 77498.53 338145.58 00:23:09.452 ======================================================== 00:23:09.452 Total : 2495.46 623.86 105720.42 48125.75 338145.58 00:23:09.452 00:23:09.452 11:31:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:09.452 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.452 No valid NVMe controllers or AIO or URING devices found 00:23:09.452 Initializing NVMe Controllers 00:23:09.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.452 Controller IO queue size 128, less than required. 00:23:09.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:09.452 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:09.452 Controller IO queue size 128, less than required. 00:23:09.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:09.452 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:09.452 WARNING: Some requested NVMe devices were skipped 00:23:09.452 11:31:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:09.452 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.977 Initializing NVMe Controllers 00:23:11.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.977 Controller IO queue size 128, less than required. 00:23:11.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.977 Controller IO queue size 128, less than required. 00:23:11.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:11.978 Initialization complete. Launching workers. 
00:23:11.978 00:23:11.978 ==================== 00:23:11.978 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:11.978 TCP transport: 00:23:11.978 polls: 18214 00:23:11.978 idle_polls: 13800 00:23:11.978 sock_completions: 4414 00:23:11.978 nvme_completions: 6777 00:23:11.978 submitted_requests: 10130 00:23:11.978 queued_requests: 1 00:23:11.978 00:23:11.978 ==================== 00:23:11.978 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:11.978 TCP transport: 00:23:11.978 polls: 18333 00:23:11.978 idle_polls: 13194 00:23:11.978 sock_completions: 5139 00:23:11.978 nvme_completions: 7155 00:23:11.978 submitted_requests: 10798 00:23:11.978 queued_requests: 1 00:23:11.978 ======================================================== 00:23:11.978 Latency(us) 00:23:11.978 Device Information : IOPS MiB/s Average min max 00:23:11.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1693.95 423.49 76961.85 49363.55 117295.69 00:23:11.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1788.45 447.11 72217.52 32706.59 103997.79 00:23:11.978 ======================================================== 00:23:11.978 Total : 3482.40 870.60 74525.32 32706.59 117295.69 00:23:11.978 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:11.978 11:31:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.978 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.978 rmmod nvme_tcp 00:23:12.235 rmmod nvme_fabrics 00:23:12.235 rmmod nvme_keyring 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1596485 ']' 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1596485 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1596485 ']' 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1596485 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596485 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596485' 00:23:12.235 killing process with pid 1596485 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 1596485 00:23:12.235 11:31:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1596485 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.761 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.666 00:23:16.666 real 0m24.837s 00:23:16.666 user 1m6.600s 00:23:16.666 sys 0m7.663s 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:16.666 ************************************ 00:23:16.666 END TEST nvmf_perf 00:23:16.666 ************************************ 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:23:16.666 ************************************ 00:23:16.666 START TEST nvmf_fio_host 00:23:16.666 ************************************ 00:23:16.666 11:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:16.666 * Looking for test storage... 00:23:16.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.666 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.666 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.666 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.666 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.666 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.667 11:31:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.944 11:31:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:21.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:21.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:21.944 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:21.945 Found net devices under 0000:86:00.0: cvl_0_0 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:21.945 Found net devices under 0000:86:00.1: cvl_0_1 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.945 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:22.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:22.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:23:22.204 00:23:22.204 --- 10.0.0.2 ping statistics --- 00:23:22.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.204 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:22.204 00:23:22.204 --- 10.0.0.1 ping statistics --- 00:23:22.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.204 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.204 
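The namespace plumbing traced above (`ip netns add`, `ip link set … netns`, address assignment, the iptables ACCEPT rule, and the two-way ping check) follows a fixed pattern in nvmf/common.sh. A condensed sketch using the device names and addresses from this log; it needs root to run for real, so the commands are routed through a recording wrapper here:

```shell
#!/usr/bin/env bash
# Sketch of the NVMe/TCP test-network bring-up performed by nvmf/common.sh.
# run() only records and prints each command; drop the wrapper (and run
# as root) to execute them for real.
CMDS=""
run() { CMDS="$CMDS$*;"; echo "+ $*"; }

NS=cvl_0_0_ns_spdk              # namespace that will own the target port
TGT_IF=cvl_0_0  TGT_IP=10.0.0.2
INI_IF=cvl_0_1  INI_IP=10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"               # move target port into the ns
run ip addr add "$INI_IP/24" dev "$INI_IF"          # initiator side stays in root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                             # verify reachability both ways
run ip netns exec "$NS" ping -c 1 "$INI_IP"
```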
11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1602763 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1602763 00:23:22.204 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1602763 ']' 00:23:22.464 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.464 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.464 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.464 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.464 11:31:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.464 [2024-07-26 11:31:17.914717] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:23:22.464 [2024-07-26 11:31:17.914769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.464 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.464 [2024-07-26 11:31:17.988063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.464 [2024-07-26 11:31:18.061996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.464 [2024-07-26 11:31:18.062037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.464 [2024-07-26 11:31:18.062044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.464 [2024-07-26 11:31:18.062050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.464 [2024-07-26 11:31:18.062054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
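The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from `waitforlisten` in autotest_common.sh, which in essence polls until the target's RPC socket appears (the real helper also checks that the pid is still alive and that the socket answers an RPC). A simplified analogue:

```shell
# Simplified analogue of autotest_common.sh's waitforlisten: poll until
# a socket path shows up, failing after a bounded number of attempts.
# (SPDK's real helper additionally verifies the pid and probes the RPC.)
wait_for_sock() {
    sock=$1 retries=${2:-50} i=0
    until [ -e "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$retries" ]; then
            return 1
        fi
        sleep 0.1
    done
}
```

Typical use would be `wait_for_sock /var/tmp/spdk.sock` after launching `nvmf_tgt` and before issuing any rpc.py call.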
00:23:22.464 [2024-07-26 11:31:18.062115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.464 [2024-07-26 11:31:18.062225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.464 [2024-07-26 11:31:18.062330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.464 [2024-07-26 11:31:18.062331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.398 11:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.398 11:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:23.398 11:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:23.398 [2024-07-26 11:31:18.856188] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.398 11:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:23.398 11:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.398 11:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.398 11:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:23.657 Malloc1 00:23:23.657 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.658 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:23.915 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.172 [2024-07-26 11:31:19.646181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.172 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:24.463 11:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.727 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:24.727 fio-3.35 
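Stripped of the xtrace noise, the target provisioning and the fio launch above reduce to five RPCs plus one LD_PRELOAD invocation. The RPC names, arguments, and paths below are exactly those appearing in this log; the wrapper only records the commands, since executing them requires a live `nvmf_tgt` and the SPDK tree:

```shell
#!/usr/bin/env bash
# Dry-run condensation of host/fio.sh: provision an NVMe-oF TCP target,
# then drive it with fio through the SPDK NVMe plugin. rpc() records
# instead of executing; point it at scripts/rpc.py to run for real.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
CMDS=""
rpc() { CMDS="$CMDS rpc.py $*"; echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# fio: the SPDK engine is LD_PRELOADed and "filename" encodes the
# transport (trtype/adrfam/traddr/trsvcid/ns) instead of naming a device.
FIO_CMD="LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
$SPDK/app/fio/nvme/example_config.fio \
--filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096"
echo "$FIO_CMD"
```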
00:23:24.727 Starting 1 thread 00:23:24.727 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.252 00:23:27.252 test: (groupid=0, jobs=1): err= 0: pid=1603189: Fri Jul 26 11:31:22 2024 00:23:27.253 read: IOPS=12.2k, BW=47.6MiB/s (49.9MB/s)(95.5MiB/2005msec) 00:23:27.253 slat (nsec): min=1542, max=242039, avg=1749.46, stdev=2204.20 00:23:27.253 clat (usec): min=3125, max=10530, avg=5803.88, stdev=449.09 00:23:27.253 lat (usec): min=3159, max=10532, avg=5805.63, stdev=449.08 00:23:27.253 clat percentiles (usec): 00:23:27.253 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5473], 00:23:27.253 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:23:27.253 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:23:27.253 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8291], 99.95th=[ 8979], 00:23:27.253 | 99.99th=[ 9765] 00:23:27.253 bw ( KiB/s): min=47896, max=49312, per=99.95%, avg=48752.00, stdev=660.96, samples=4 00:23:27.253 iops : min=11974, max=12328, avg=12188.00, stdev=165.24, samples=4 00:23:27.253 write: IOPS=12.1k, BW=47.5MiB/s (49.8MB/s)(95.2MiB/2005msec); 0 zone resets 00:23:27.253 slat (nsec): min=1578, max=231766, avg=1819.82, stdev=1658.37 00:23:27.253 clat (usec): min=2477, max=9076, avg=4682.25, stdev=375.18 00:23:27.253 lat (usec): min=2492, max=9078, avg=4684.07, stdev=375.28 00:23:27.253 clat percentiles (usec): 00:23:27.253 | 1.00th=[ 3818], 5.00th=[ 4113], 10.00th=[ 4293], 20.00th=[ 4424], 00:23:27.253 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:23:27.253 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:23:27.253 | 99.00th=[ 5473], 99.50th=[ 5800], 99.90th=[ 7373], 99.95th=[ 7701], 00:23:27.253 | 99.99th=[ 8356] 00:23:27.253 bw ( KiB/s): min=48408, max=49024, per=100.00%, avg=48610.00, stdev=279.99, samples=4 00:23:27.253 iops : min=12102, max=12256, avg=12152.50, stdev=70.00, samples=4 00:23:27.253 lat (msec) : 4=1.40%, 10=98.59%, 
20=0.01% 00:23:27.253 cpu : usr=75.75%, sys=22.90%, ctx=105, majf=0, minf=5 00:23:27.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:27.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:27.253 issued rwts: total=24448,24360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:27.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:27.253 00:23:27.253 Run status group 0 (all jobs): 00:23:27.253 READ: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=95.5MiB (100MB), run=2005-2005msec 00:23:27.253 WRITE: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=95.2MiB (99.8MB), run=2005-2005msec 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
shift 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:27.253 11:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:23:27.253 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:27.253 fio-3.35 00:23:27.253 Starting 1 thread 00:23:27.253 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.780 00:23:29.780 test: (groupid=0, jobs=1): err= 0: pid=1603763: Fri Jul 26 11:31:25 2024 00:23:29.780 read: IOPS=11.1k, BW=173MiB/s (182MB/s)(348MiB/2006msec) 00:23:29.780 slat (nsec): min=2350, max=86732, avg=2907.98, stdev=1256.36 00:23:29.780 clat (usec): min=1378, max=13709, avg=6632.00, stdev=1578.74 00:23:29.780 lat (usec): min=1380, max=13712, avg=6634.91, stdev=1578.82 00:23:29.780 clat percentiles (usec): 00:23:29.780 | 1.00th=[ 3556], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5211], 00:23:29.780 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6652], 60.00th=[ 7111], 00:23:29.780 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9241], 00:23:29.780 | 99.00th=[10814], 99.50th=[11469], 99.90th=[13042], 99.95th=[13173], 00:23:29.780 | 99.99th=[13566] 00:23:29.780 bw ( KiB/s): min=89088, max=94880, per=52.07%, avg=92440.00, stdev=2433.88, samples=4 00:23:29.780 iops : min= 5568, max= 5930, avg=5777.50, stdev=152.12, samples=4 00:23:29.780 write: IOPS=6629, BW=104MiB/s (109MB/s)(189MiB/1821msec); 0 zone resets 00:23:29.780 slat (usec): min=28, max=385, avg=32.38, stdev= 6.66 00:23:29.780 clat (usec): min=4630, max=14552, avg=8387.03, stdev=1415.54 00:23:29.780 lat (usec): min=4660, max=14582, avg=8419.41, stdev=1416.44 00:23:29.780 clat percentiles (usec): 00:23:29.780 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7242], 00:23:29.780 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:23:29.780 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[10945], 00:23:29.780 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13698], 99.95th=[13960], 00:23:29.780 | 99.99th=[14222] 00:23:29.780 bw ( KiB/s): min=94560, max=97344, per=90.49%, 
avg=95984.00, stdev=1535.67, samples=4 00:23:29.780 iops : min= 5910, max= 6084, avg=5999.00, stdev=95.98, samples=4 00:23:29.780 lat (msec) : 2=0.03%, 4=2.06%, 10=91.60%, 20=6.31% 00:23:29.780 cpu : usr=86.63%, sys=12.42%, ctx=58, majf=0, minf=2 00:23:29.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:29.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:29.780 issued rwts: total=22256,12072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:29.780 00:23:29.780 Run status group 0 (all jobs): 00:23:29.780 READ: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=348MiB (365MB), run=2006-2006msec 00:23:29.780 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=189MiB (198MB), run=1821-1821msec 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:23:29.780 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.780 rmmod nvme_tcp 00:23:29.780 rmmod nvme_fabrics 00:23:29.780 rmmod nvme_keyring 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1602763 ']' 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1602763 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1602763 ']' 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1602763 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1602763 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1602763' 00:23:30.038 killing process with pid 1602763 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1602763 00:23:30.038 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1602763 00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.297 11:31:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:32.275 00:23:32.275 real 0m15.787s 00:23:32.275 user 0m47.305s 00:23:32.275 sys 0m6.199s 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.275 ************************************ 00:23:32.275 END TEST nvmf_fio_host 00:23:32.275 ************************************ 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.275 ************************************ 00:23:32.275 START TEST nvmf_failover 00:23:32.275 ************************************ 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:32.275 * Looking for test storage... 00:23:32.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.275 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:32.535 11:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.808 
11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:37.808 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:37.808 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:37.808 11:31:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.808 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:37.809 Found net devices under 0000:86:00.0: cvl_0_0 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:37.809 Found net devices under 0000:86:00.1: cvl_0_1 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.809 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:38.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:23:38.068 00:23:38.068 --- 10.0.0.2 ping statistics --- 00:23:38.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.068 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:23:38.068 00:23:38.068 --- 10.0.0.1 ping statistics --- 00:23:38.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.068 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1607679 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1607679 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1607679 ']' 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.068 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.068 [2024-07-26 11:31:33.719809] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:23:38.068 [2024-07-26 11:31:33.719854] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.326 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.326 [2024-07-26 11:31:33.792259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:38.326 [2024-07-26 11:31:33.869258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.326 [2024-07-26 11:31:33.869291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.326 [2024-07-26 11:31:33.869298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.326 [2024-07-26 11:31:33.869304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.326 [2024-07-26 11:31:33.869309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.326 [2024-07-26 11:31:33.869417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.326 [2024-07-26 11:31:33.869525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.327 [2024-07-26 11:31:33.869526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.890 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.890 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:38.890 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.890 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.890 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.147 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:39.147 [2024-07-26 11:31:34.706184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.147 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:39.404 Malloc0 00:23:39.404 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.661 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.918 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.918 [2024-07-26 11:31:35.473781] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.918 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.174 [2024-07-26 11:31:35.638229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.174 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:40.174 [2024-07-26 11:31:35.814818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:40.430 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:40.430 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1607994 00:23:40.430 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.430 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1607994 /var/tmp/bdevperf.sock 00:23:40.430 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1607994 ']' 00:23:40.431 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.431 11:31:35 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.431 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.431 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.431 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:41.362 11:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.362 11:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:41.362 11:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:41.619 NVMe0n1 00:23:41.619 11:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:41.876 00:23:41.876 11:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.876 11:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1608226 00:23:41.876 11:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:42.809 11:31:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:23:43.066 [2024-07-26 11:31:38.497785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1f50 is same with the state(5) to be set 00:23:43.066 [2024-07-26 11:31:38.497832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1f50 is same with the state(5) to be set 00:23:43.066 [2024-07-26 11:31:38.497840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1f50 is same with the state(5) to be set 00:23:43.066 [2024-07-26 11:31:38.497846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1f50 is same with the state(5) to be set 00:23:43.066 [2024-07-26 11:31:38.497852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1f50 is same with the state(5) to be set 00:23:43.066 [2024-07-26 11:31:38.497859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b1f50 is same with the state(5) to be set 00:23:43.066 11:31:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:46.343 11:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:46.343 00:23:46.343 11:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.601 [2024-07-26 11:31:42.007509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2d70 is same with the state(5) to be set 00:23:46.601 [2024-07-26 11:31:42.007548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2d70 is same with the state(5) to be set 00:23:46.601 [2024-07-26 11:31:42.007555] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2d70 is same with the state(5) to be set 00:23:46.601
[... same *ERROR* line repeated for tqpair=0x12b2d70, timestamps 11:31:42.007562 through 11:31:42.007625, condensed ...]
00:23:46.601 [2024-07-26 11:31:42.007638]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2d70 is same with the state(5) to be set 00:23:46.601 [2024-07-26 11:31:42.007644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2d70 is same with the state(5) to be set 00:23:46.601 11:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:49.878 11:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.878 [2024-07-26 11:31:45.212129] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.878 11:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:50.810 11:31:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:50.810 [2024-07-26 11:31:46.415904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cb40 is same with the state(5) to be set 00:23:50.810 [2024-07-26 11:31:46.415949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cb40 is same with the state(5) to be set 00:23:50.810 [2024-07-26 11:31:46.415957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cb40 is same with the state(5) to be set 00:23:50.810 [2024-07-26 11:31:46.415963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cb40 is same with the state(5) to be set 00:23:50.810 [2024-07-26 11:31:46.415969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cb40 is same with the state(5) to be set 00:23:50.810 [2024-07-26 11:31:46.415975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x146cb40 is same with the state(5) to be set
[... same *ERROR* line repeated for tqpair=0x146cb40, timestamps 11:31:46.415981 through 11:31:46.416014, condensed ...]
00:23:50.810 11:31:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1608226
00:23:57.377 0
00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1607994
00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1607994 ']'
00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1607994
00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1607994 00:23:57.377 11:31:52
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1607994' 00:23:57.377 killing process with pid 1607994 00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1607994 00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1607994 00:23:57.377 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.377 [2024-07-26 11:31:35.873786] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:23:57.377 [2024-07-26 11:31:35.873830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607994 ] 00:23:57.377 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.377 [2024-07-26 11:31:35.937734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.377 [2024-07-26 11:31:36.013173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.377 Running I/O for 15 seconds... 
00:23:57.377 [2024-07-26 11:31:38.498174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.377 [2024-07-26 11:31:38.498210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ (lba 99184-99352) and WRITE (lba 99976-100192) command/completion pairs, each ABORTED - SQ DELETION (00/08), condensed ...]
00:23:57.379
[2024-07-26 11:31:38.498967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.498974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.498983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.498989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.498997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 
[2024-07-26 11:31:38.499212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 
[2024-07-26 11:31:38.499457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.379 [2024-07-26 11:31:38.499463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.379 [2024-07-26 11:31:38.499471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 
[2024-07-26 11:31:38.499710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 
[2024-07-26 11:31:38.499953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.499989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.499997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.500003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.500011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.500018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.500027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.500034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.380 [2024-07-26 11:31:38.500041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.380 [2024-07-26 11:31:38.500047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:38.500055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.381 [2024-07-26 11:31:38.500061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:38.500080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.381 [2024-07-26 11:31:38.500087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.381 [2024-07-26 11:31:38.500093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99968 len:8 PRP1 0x0 PRP2 0x0 00:23:57.381 [2024-07-26 11:31:38.500100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:38.500140] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19194b0 was disconnected and freed. reset controller. 
00:23:57.381 [2024-07-26 11:31:38.500149] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:57.381 [2024-07-26 11:31:38.500169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.381 [2024-07-26 11:31:38.500177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:38.500185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.381 [2024-07-26 11:31:38.500191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:38.500197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.381 [2024-07-26 11:31:38.500204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:38.500210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.381 [2024-07-26 11:31:38.500217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:38.500224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:57.381 [2024-07-26 11:31:38.500260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1926540 (9): Bad file descriptor 00:23:57.381 [2024-07-26 11:31:38.503032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.381 [2024-07-26 11:31:38.536526] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:57.381 [2024-07-26 11:31:42.007868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.381 [2024-07-26 11:31:42.007902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.007916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.007924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.007936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.007943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.007951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.007957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.007966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.007972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.007980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.007987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.007994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.381 [2024-07-26 11:31:42.008219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.381 [2024-07-26 11:31:42.008320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.381 [2024-07-26 11:31:42.008328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 
[2024-07-26 11:31:42.008465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.382 [2024-07-26 11:31:42.008661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.382 [2024-07-26 11:31:42.008720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.382 [2024-07-26 11:31:42.008770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.382 [2024-07-26 11:31:42.008776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 
[2024-07-26 11:31:42.008962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.008990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.008999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.383 [2024-07-26 11:31:42.009132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009220] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:104 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.383 [2024-07-26 11:31:42.009346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.383 [2024-07-26 11:31:42.009353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.384 [2024-07-26 11:31:42.009389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 
[2024-07-26 11:31:42.009636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:42.009714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.384 [2024-07-26 11:31:42.009728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.384 [2024-07-26 11:31:42.009742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.384 [2024-07-26 11:31:42.009756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.384 [2024-07-26 11:31:42.009782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.384 [2024-07-26 11:31:42.009787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37016 len:8 PRP1 0x0 PRP2 0x0 00:23:57.384 [2024-07-26 11:31:42.009799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009838] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194a3f0 was disconnected and freed. reset controller. 
00:23:57.384 [2024-07-26 11:31:42.009847] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:57.384 [2024-07-26 11:31:42.009867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.384 [2024-07-26 11:31:42.009875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.384 [2024-07-26 11:31:42.009888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.384 [2024-07-26 11:31:42.009902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.384 [2024-07-26 11:31:42.009915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:42.009922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:57.384 [2024-07-26 11:31:42.012687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.384 [2024-07-26 11:31:42.012717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1926540 (9): Bad file descriptor 00:23:57.384 [2024-07-26 11:31:42.081476] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:57.384 [2024-07-26 11:31:46.417787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:46.417826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:46.417841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:46.417850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.384 [2024-07-26 11:31:46.417858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.384 [2024-07-26 11:31:46.417865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60720 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.417991] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.417999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.418015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.418031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.385 [2024-07-26 11:31:46.418046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.385 [2024-07-26 11:31:46.418165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 
[2024-07-26 11:31:46.418413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.385 [2024-07-26 11:31:46.418456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.385 [2024-07-26 11:31:46.418464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418495] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 
11:31:46.418834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.386 [2024-07-26 11:31:46.418950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.386 [2024-07-26 11:31:46.418956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.418965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.418972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.418980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.418986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.418994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.387 [2024-07-26 11:31:46.419134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61392 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419183] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61400 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61408 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61416 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61424 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 
[2024-07-26 11:31:46.419269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61432 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61440 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61448 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61456 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61464 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61472 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61480 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61488 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61496 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61504 len:8 PRP1 0x0 PRP2 0x0 00:23:57.387 [2024-07-26 11:31:46.419502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:57.387 [2024-07-26 11:31:46.419509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.387 [2024-07-26 11:31:46.419513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.387 [2024-07-26 11:31:46.419519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61512 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61520 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61528 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61536 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61544 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61552 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61560 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61568 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61576 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61584 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 
[2024-07-26 11:31:46.419750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61592 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61608 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:61616 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61624 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61632 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61640 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419911] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61648 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61656 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.419958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.419963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.419968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61664 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.419976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.429511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.429521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 
11:31:46.429527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61672 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.429534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.429542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.429547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.429552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61680 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.429560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.429567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.429572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.429579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61688 len:8 PRP1 0x0 PRP2 0x0 00:23:57.388 [2024-07-26 11:31:46.429586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.388 [2024-07-26 11:31:46.429593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.388 [2024-07-26 11:31:46.429598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.388 [2024-07-26 11:31:46.429604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61696 len:8 PRP1 0x0 PRP2 0x0 00:23:57.389 [2024-07-26 11:31:46.429610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.389 [2024-07-26 11:31:46.429617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.389 [2024-07-26 11:31:46.429623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.389 [2024-07-26 11:31:46.429634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61704 len:8 PRP1 0x0 PRP2 0x0 00:23:57.389 [2024-07-26 11:31:46.429640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.389 [2024-07-26 11:31:46.429681] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194a0b0 was disconnected and freed. reset controller. 00:23:57.389 [2024-07-26 11:31:46.429691] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:57.389 [2024-07-26 11:31:46.429726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.389 [2024-07-26 11:31:46.429737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.389 [2024-07-26 11:31:46.429747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.389 [2024-07-26 11:31:46.429756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.389 [2024-07-26 11:31:46.429766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.389 [2024-07-26 11:31:46.429774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.389 [2024-07-26 11:31:46.429787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.389 [2024-07-26 11:31:46.429796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.389 [2024-07-26 11:31:46.429805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.389 [2024-07-26 11:31:46.429843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1926540 (9): Bad file descriptor 00:23:57.389 [2024-07-26 11:31:46.433548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.389 [2024-07-26 11:31:46.510910] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:57.389 00:23:57.389 Latency(us) 00:23:57.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.389 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:57.389 Verification LBA range: start 0x0 length 0x4000 00:23:57.389 NVMe0n1 : 15.01 11267.98 44.02 549.94 0.00 10809.46 421.30 19598.38 00:23:57.389 =================================================================================================================== 00:23:57.389 Total : 11267.98 44.02 549.94 0.00 10809.46 421.30 19598.38 00:23:57.389 Received shutdown signal, test time was about 15.000000 seconds 00:23:57.389 00:23:57.389 Latency(us) 00:23:57.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.389 =================================================================================================================== 00:23:57.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:57.389 
11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1610746 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1610746 /var/tmp/bdevperf.sock 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1610746 ']' 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.389 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:57.953 11:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.953 11:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:57.953 11:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.210 [2024-07-26 11:31:53.739990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.210 11:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:58.467 [2024-07-26 11:31:53.920451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:58.467 11:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:58.724 NVMe0n1 00:23:58.724 11:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:58.981 00:23:58.981 11:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:23:59.239 00:23:59.239 11:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.239 11:31:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:59.496 11:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.754 11:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:03.030 11:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.030 11:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:03.030 11:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:03.030 11:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1611676 00:24:03.030 11:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1611676 00:24:03.962 0 00:24:03.962 11:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:03.962 [2024-07-26 11:31:52.754212] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:24:03.962 [2024-07-26 11:31:52.754263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610746 ] 00:24:03.962 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.962 [2024-07-26 11:31:52.820643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.962 [2024-07-26 11:31:52.889648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.962 [2024-07-26 11:31:55.228641] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:03.962 [2024-07-26 11:31:55.228685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.962 [2024-07-26 11:31:55.228696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.962 [2024-07-26 11:31:55.228704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.962 [2024-07-26 11:31:55.228710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.962 [2024-07-26 11:31:55.228717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.962 [2024-07-26 11:31:55.228724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.962 [2024-07-26 11:31:55.228731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.962 [2024-07-26 11:31:55.228738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.962 [2024-07-26 11:31:55.228744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:03.962 [2024-07-26 11:31:55.228768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:03.962 [2024-07-26 11:31:55.228782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ab540 (9): Bad file descriptor 00:24:03.962 [2024-07-26 11:31:55.249431] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:03.962 Running I/O for 1 seconds... 00:24:03.962 00:24:03.962 Latency(us) 00:24:03.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.962 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:03.962 Verification LBA range: start 0x0 length 0x4000 00:24:03.962 NVMe0n1 : 1.01 11358.52 44.37 0.00 0.00 11227.83 2356.18 9050.21 00:24:03.962 =================================================================================================================== 00:24:03.962 Total : 11358.52 44.37 0.00 0.00 11227.83 2356.18 9050.21 00:24:03.962 11:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.962 11:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:04.219 11:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.535 11:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.535 11:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:04.535 11:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.840 11:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1610746 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1610746 ']' 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1610746 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610746 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610746' 00:24:08.131 killing process with pid 1610746 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1610746 
00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1610746 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:08.131 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.389 rmmod nvme_tcp 00:24:08.389 rmmod nvme_fabrics 00:24:08.389 rmmod nvme_keyring 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1607679 ']' 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1607679 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@950 -- # '[' -z 1607679 ']' 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1607679 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1607679 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1607679' 00:24:08.389 killing process with pid 1607679 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1607679 00:24:08.389 11:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1607679 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.648 11:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.183 00:24:11.183 real 0m38.428s 00:24:11.183 user 2m3.014s 00:24:11.183 sys 0m7.661s 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:11.183 ************************************ 00:24:11.183 END TEST nvmf_failover 00:24:11.183 ************************************ 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.183 ************************************ 00:24:11.183 START TEST nvmf_host_discovery 00:24:11.183 ************************************ 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:11.183 * Looking for test storage... 
00:24:11.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:11.183 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.184 11:32:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.457 
11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.457 11:32:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:16.457 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.457 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:16.457 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:16.458 Found net devices under 0000:86:00.0: cvl_0_0 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:16.458 Found net devices under 0000:86:00.1: cvl_0_1 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.458 11:32:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.458 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.458 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.458 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.458 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:24:16.717 00:24:16.717 --- 10.0.0.2 ping statistics --- 00:24:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.717 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:24:16.717 00:24:16.717 --- 10.0.0.1 ping statistics --- 00:24:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.717 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1616071 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1616071 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1616071 ']' 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.717 11:32:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.717 [2024-07-26 11:32:12.290594] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:24:16.717 [2024-07-26 11:32:12.290640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.717 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.717 [2024-07-26 11:32:12.358679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.975 [2024-07-26 11:32:12.430587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.975 [2024-07-26 11:32:12.430633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
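The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) builds a two-namespace loopback topology: the target interface cvl_0_0 is moved into a fresh netns and given 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A dry-run sketch of that sequence, with the interface names and addresses taken from the log — run() only prints each command, so nothing here needs root or real NICs; drop the echo to apply it for real:

```shell
# Dry-run sketch of the netns setup performed by nvmf_tcp_init above.
# run() echoes instead of executing, so this is safe without privileges.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NETNS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"          # target NIC into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # initiator side, root namespace
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # reachability checks, as in the log
run ip netns exec "$NETNS" ping -c 1 10.0.0.1
```

The pings in the trace (0.167 ms / 0.137 ms RTT) confirm both directions work before the target is started inside the namespace via NVMF_TARGET_NS_CMD.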
00:24:16.975 [2024-07-26 11:32:12.430640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.975 [2024-07-26 11:32:12.430646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.975 [2024-07-26 11:32:12.430651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.975 [2024-07-26 11:32:12.430669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 [2024-07-26 11:32:13.125528] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
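The target-side provisioning that discovery.sh drives through rpc_cmd, collected from the surrounding trace (transport creation, discovery listener on 8009, two null bdevs, subsystem cnode0 with a namespace, data listener on 4420, and a host allow-list entry), can be sketched as a plain command list. rpc() just echoes here so the sketch runs without a live SPDK target; substitute scripts/rpc.py against the target's socket to apply it:

```shell
# Target-side RPC sequence from the trace; rpc() echoes for safety.
rpc() { echo "rpc_cmd $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc bdev_null_create null0 1000 512
rpc bdev_null_create null1 1000 512
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
```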
00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 [2024-07-26 11:32:13.137690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 null0 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 null1 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1616145 00:24:17.575 
11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1616145 /tmp/host.sock 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1616145 ']' 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:17.575 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 [2024-07-26 11:32:13.211669] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:24:17.575 [2024-07-26 11:32:13.211707] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616145 ] 00:24:17.575 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.832 [2024-07-26 11:32:13.277547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.832 [2024-07-26 11:32:13.361546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
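On the host side, the trace above starts a second nvmf_tgt (core mask 0x1, RPC socket /tmp/host.sock) and then points bdev_nvme discovery at the target's discovery service on 10.0.0.2:8009 with the hostnqn that was added to the subsystem's allow list. A sketch of that flow — host_rpc() echoes instead of executing, and the nvmf_tgt launch is shown as a comment since it needs the built binary:

```shell
# Host side of the test: second app + discovery start, per the trace.
# host_rpc() echoes for safety; the commented line is the app launch
# performed by host/discovery.sh@44 in the log.
host_rpc() { echo "rpc_cmd -s /tmp/host.sock $*"; }

# build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

host_rpc log_set_flag bdev_nvme
host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```

Everything after this in the trace polls the host app through the same socket (bdev_nvme_get_controllers, bdev_get_bdevs) until the discovered controller and namespace show up.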
00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.398 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.656 11:32:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
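The waitforcondition helper exercised throughout this trace (common/autotest_common.sh@914-920) retries an eval'd condition string up to ten times with a one-second sleep between attempts, returning 0 on success and falling through on timeout. A runnable reconstruction — the optional second argument overriding max is a convenience added here; the original hardcodes max=10:

```shell
# Reconstruction of waitforcondition from the trace: poll an eval'd
# condition up to $max times, sleeping 1s between attempts.
waitforcondition() {
    local cond=$1 max=${2:-10}
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
```

In the log it polls RPC-derived state, e.g. waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' and the notification-count check seen below.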
00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.656 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.914 [2024-07-26 11:32:14.364879] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.914 11:32:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:18.914 11:32:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:18.914 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.915 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.915 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.915 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.915 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.915 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.915 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.172 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:19.172 11:32:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:24:19.430 [2024-07-26 11:32:15.044712] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:19.430 [2024-07-26 11:32:15.044733] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:19.430 [2024-07-26 11:32:15.044745] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.687 [2024-07-26 11:32:15.132008] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:19.687 [2024-07-26 11:32:15.236883] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:19.687 [2024-07-26 11:32:15.236900] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:24:19.944 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.202 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:20.203 
11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.203 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.462 11:32:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.462 [2024-07-26 11:32:16.045421] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:20.462 [2024-07-26 11:32:16.045633] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:20.462 [2024-07-26 11:32:16.045654] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.462 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:20.720 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:20.721 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:20.721 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:20.721 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.721 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.721 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.721 [2024-07-26 11:32:16.173357] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:20.721 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:20.721 11:32:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # 
sleep 1 00:24:20.978 [2024-07-26 11:32:16.473615] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:20.978 [2024-07-26 11:32:16.473637] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:20.978 [2024-07-26 11:32:16.473641] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:21.912 11:32:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.912 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.913 [2024-07-26 11:32:17.305176] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:21.913 [2024-07-26 11:32:17.305197] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.913 [2024-07-26 11:32:17.307709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.913 [2024-07-26 11:32:17.307725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.913 [2024-07-26 11:32:17.307733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:21.913 [2024-07-26 11:32:17.307740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.913 [2024-07-26 11:32:17.307747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.913 [2024-07-26 11:32:17.307753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.913 [2024-07-26 11:32:17.307760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.913 [2024-07-26 11:32:17.307766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.913 [2024-07-26 11:32:17.307776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:21.913 11:32:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.913 [2024-07-26 11:32:17.317723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.913 [2024-07-26 11:32:17.327761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.913 [2024-07-26 11:32:17.327969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-26 11:32:17.327984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1202f30 with addr=10.0.0.2, port=4420 00:24:21.913 [2024-07-26 11:32:17.327991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.913 [2024-07-26 11:32:17.328003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.913 [2024-07-26 11:32:17.328019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.913 [2024-07-26 11:32:17.328026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.913 [2024-07-26 11:32:17.328033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed 
state. 00:24:21.913 [2024-07-26 11:32:17.328043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.913 [2024-07-26 11:32:17.337816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.913 [2024-07-26 11:32:17.338053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-26 11:32:17.338065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1202f30 with addr=10.0.0.2, port=4420 00:24:21.913 [2024-07-26 11:32:17.338072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.913 [2024-07-26 11:32:17.338083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.913 [2024-07-26 11:32:17.338092] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.913 [2024-07-26 11:32:17.338099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.913 [2024-07-26 11:32:17.338108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.913 [2024-07-26 11:32:17.338117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.913 [2024-07-26 11:32:17.347865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.913 [2024-07-26 11:32:17.348148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-26 11:32:17.348160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1202f30 with addr=10.0.0.2, port=4420 00:24:21.913 [2024-07-26 11:32:17.348166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.913 [2024-07-26 11:32:17.348176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.913 [2024-07-26 11:32:17.348190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.913 [2024-07-26 11:32:17.348196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.913 [2024-07-26 11:32:17.348203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.913 [2024-07-26 11:32:17.348211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.913 [2024-07-26 11:32:17.357916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.913 [2024-07-26 11:32:17.358149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-26 11:32:17.358161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1202f30 with addr=10.0.0.2, port=4420 00:24:21.913 [2024-07-26 11:32:17.358169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.913 [2024-07-26 11:32:17.358180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.913 [2024-07-26 11:32:17.358189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.913 [2024-07-26 11:32:17.358195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.913 [2024-07-26 11:32:17.358203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.913 [2024-07-26 11:32:17.358212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.913 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.914 [2024-07-26 11:32:17.367967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.914 [2024-07-26 11:32:17.368132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-26 11:32:17.368143] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1202f30 with addr=10.0.0.2, port=4420 00:24:21.914 [2024-07-26 11:32:17.368150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.914 [2024-07-26 11:32:17.368160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.914 [2024-07-26 11:32:17.368169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.914 [2024-07-26 11:32:17.368175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.914 [2024-07-26 11:32:17.368182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.914 [2024-07-26 11:32:17.368190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.914 [2024-07-26 11:32:17.378018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.914 [2024-07-26 11:32:17.378186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-26 11:32:17.378199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1202f30 with addr=10.0.0.2, port=4420 00:24:21.914 [2024-07-26 11:32:17.378206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.914 [2024-07-26 11:32:17.378216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.914 [2024-07-26 11:32:17.378225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.914 [2024-07-26 11:32:17.378231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.914 [2024-07-26 11:32:17.378238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.914 [2024-07-26 11:32:17.378247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.914 [2024-07-26 11:32:17.388069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.914 [2024-07-26 11:32:17.388323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-26 11:32:17.388335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1202f30 with addr=10.0.0.2, port=4420 00:24:21.914 [2024-07-26 11:32:17.388342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202f30 is same with the state(5) to be set 00:24:21.914 [2024-07-26 11:32:17.388352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1202f30 (9): Bad file descriptor 00:24:21.914 [2024-07-26 11:32:17.388365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.914 [2024-07-26 11:32:17.388372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.914 [2024-07-26 11:32:17.388378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.914 [2024-07-26 11:32:17.388387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.914 [2024-07-26 11:32:17.390937] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:21.914 [2024-07-26 11:32:17.390951] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:21.914 11:32:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:21.914 11:32:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.914 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.172 11:32:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.172 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:22.173 11:32:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.173 11:32:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.106 [2024-07-26 11:32:18.714114] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:23.106 [2024-07-26 11:32:18.714130] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:23.106 [2024-07-26 11:32:18.714140] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:23.364 [2024-07-26 11:32:18.841541] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:23.364 [2024-07-26 11:32:18.950921] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:23.364 [2024-07-26 11:32:18.950947] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:23.364 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.364 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:23.364 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:24:23.365 request: 00:24:23.365 { 00:24:23.365 "name": "nvme", 00:24:23.365 "trtype": "tcp", 00:24:23.365 "traddr": "10.0.0.2", 00:24:23.365 "adrfam": "ipv4", 00:24:23.365 "trsvcid": "8009", 00:24:23.365 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:23.365 "wait_for_attach": true, 00:24:23.365 "method": "bdev_nvme_start_discovery", 00:24:23.365 "req_id": 1 00:24:23.365 } 00:24:23.365 Got JSON-RPC error response 00:24:23.365 response: 00:24:23.365 { 00:24:23.365 "code": -17, 00:24:23.365 "message": "File exists" 00:24:23.365 } 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:23.365 11:32:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.365 11:32:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:23.365 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:23.365 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:23.365 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.365 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.365 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:23.365 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.365 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.623 request: 00:24:23.623 { 00:24:23.623 "name": "nvme_second", 00:24:23.623 "trtype": "tcp", 00:24:23.623 "traddr": "10.0.0.2", 00:24:23.623 "adrfam": "ipv4", 00:24:23.623 "trsvcid": "8009", 00:24:23.623 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:23.623 "wait_for_attach": true, 00:24:23.623 "method": "bdev_nvme_start_discovery", 00:24:23.623 "req_id": 1 00:24:23.623 } 00:24:23.623 Got JSON-RPC error response 00:24:23.623 response: 00:24:23.623 { 00:24:23.623 "code": -17, 00:24:23.623 "message": "File exists" 00:24:23.623 } 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:23.623 
11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:23.623 11:32:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.623 11:32:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.557 [2024-07-26 11:32:20.186390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-07-26 11:32:20.186428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12342a0 with addr=10.0.0.2, port=8010 00:24:24.557 [2024-07-26 11:32:20.186445] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:24.557 [2024-07-26 11:32:20.186451] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:24.557 [2024-07-26 11:32:20.186457] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:25.930 [2024-07-26 11:32:21.188800] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.930 [2024-07-26 11:32:21.188826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12342a0 with addr=10.0.0.2, port=8010 00:24:25.930 [2024-07-26 11:32:21.188837] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:25.930 [2024-07-26 11:32:21.188843] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:25.930 [2024-07-26 11:32:21.188848] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:26.865 [2024-07-26 11:32:22.190974] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:26.865 request: 00:24:26.865 { 00:24:26.865 "name": "nvme_second", 00:24:26.865 "trtype": "tcp", 00:24:26.865 "traddr": "10.0.0.2", 00:24:26.865 "adrfam": "ipv4", 00:24:26.865 "trsvcid": "8010", 00:24:26.865 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:26.865 "wait_for_attach": false, 00:24:26.865 "attach_timeout_ms": 3000, 00:24:26.865 "method": "bdev_nvme_start_discovery", 00:24:26.865 "req_id": 1 00:24:26.865 } 00:24:26.865 Got JSON-RPC error response 00:24:26.865 response: 00:24:26.865 { 00:24:26.865 "code": -110, 00:24:26.865 "message": "Connection timed out" 00:24:26.865 } 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1616145 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.865 rmmod nvme_tcp 00:24:26.865 rmmod nvme_fabrics 00:24:26.865 rmmod nvme_keyring 00:24:26.865 11:32:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1616071 ']' 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1616071 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1616071 ']' 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1616071 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616071 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616071' 00:24:26.865 killing process with pid 1616071 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1616071 00:24:26.865 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1616071 00:24:27.125 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.125 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:27.125 11:32:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:27.125 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.125 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.125 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.125 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.125 11:32:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.030 00:24:29.030 real 0m18.266s 00:24:29.030 user 0m22.635s 00:24:29.030 sys 0m5.783s 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.030 ************************************ 00:24:29.030 END TEST nvmf_host_discovery 00:24:29.030 ************************************ 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.030 ************************************ 00:24:29.030 START TEST nvmf_host_multipath_status 00:24:29.030 ************************************ 00:24:29.030 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:29.289 * Looking for test storage... 00:24:29.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.289 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.290 11:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.290 11:32:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.864 
11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:35.864 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:35.864 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:35.864 Found net devices under 0000:86:00.0: cvl_0_0 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.864 11:32:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:35.864 Found net devices under 0000:86:00.1: cvl_0_1 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:35.864 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:35.865 11:32:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:35.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:35.865 00:24:35.865 --- 10.0.0.2 ping statistics --- 00:24:35.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.865 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:24:35.865 00:24:35.865 --- 10.0.0.1 ping statistics --- 00:24:35.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.865 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:35.865 11:32:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1621277 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1621277 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1621277 ']' 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.865 11:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.865 [2024-07-26 11:32:30.632437] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:24:35.865 [2024-07-26 11:32:30.632485] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.865 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.865 [2024-07-26 11:32:30.701596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:35.865 [2024-07-26 11:32:30.782150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.865 [2024-07-26 11:32:30.782190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.865 [2024-07-26 11:32:30.782197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.865 [2024-07-26 11:32:30.782204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.865 [2024-07-26 11:32:30.782209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:35.865 [2024-07-26 11:32:30.782275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.865 [2024-07-26 11:32:30.782277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1621277 00:24:35.865 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:36.125 [2024-07-26 11:32:31.623153] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.125 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:36.421 Malloc0 00:24:36.421 11:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:36.421 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.684 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.942 [2024-07-26 11:32:32.392360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.942 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.942 [2024-07-26 11:32:32.572843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1621694 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1621694 /var/tmp/bdevperf.sock 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1621694 ']' 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.200 11:32:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.200 11:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:38.133 11:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.133 11:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:38.133 11:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:38.133 11:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:38.390 Nvme0n1 00:24:38.390 11:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:38.955 Nvme0n1 00:24:38.955 11:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:38.955 11:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 
00:24:40.854 11:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:40.854 11:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:41.112 11:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:41.112 11:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:42.485 11:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:42.485 11:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:42.485 11:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.485 11:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.485 11:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.485 11:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:42.485 11:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.485 11:32:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.485 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.485 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.485 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.485 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.743 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.743 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.743 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.743 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.001 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.001 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:43.001 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:43.001 11:32:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.259 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.259 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:43.259 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.259 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:43.517 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.517 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:43.517 11:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:43.517 11:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:43.775 11:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:44.708 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:44.708 11:32:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.708 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.708 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.965 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.965 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:44.965 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.965 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:45.223 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.223 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:45.223 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.223 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:45.480 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.480 11:32:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:45.480 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.480 11:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:45.480 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.480 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:45.480 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.480 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.738 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.738 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:45.738 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.738 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.995 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.995 
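Each `set_ANA_state A B` step in this log expands to one `nvmf_subsystem_listener_set_ana_state` RPC per listener port (4420 gets the first state, 4421 the second). A sketch of that pattern, building the command lines rather than running them (the helper name and structure are paraphrased from the trace; the full script path is elided):

```python
# Sketch of the set_ANA_state helper seen in the trace: one
# nvmf_subsystem_listener_set_ana_state RPC per listener port.
NQN = "nqn.2016-06.io.spdk:cnode1"

def set_ana_state_cmds(state_4420, state_4421):
    cmds = []
    for port, state in ((4420, state_4420), (4421, state_4421)):
        cmds.append(["scripts/rpc.py", "nvmf_subsystem_listener_set_ana_state",
                     NQN, "-t", "tcp", "-a", "10.0.0.2",
                     "-s", str(port), "-n", state])
    return cmds

# e.g. the transition logged at multipath_status.sh@100:
cmds = set_ana_state_cmds("non_optimized", "non_optimized")
```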
11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:45.995 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:46.253 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:46.253 11:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:47.626 11:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:47.626 11:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:47.626 11:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.626 11:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.627 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.627 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:47.627 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.627 11:32:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.627 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.627 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.627 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.627 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.891 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.891 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.891 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.891 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.153 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.153 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:48.153 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:48.153 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:48.411 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.411 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:48.411 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.411 11:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:48.411 11:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.411 11:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:48.411 11:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:48.668 11:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:48.925 11:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:49.858 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:49.858 11:32:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:49.858 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.858 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:50.116 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.116 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:50.116 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.116 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:50.374 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.374 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:50.374 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.374 11:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:50.374 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.374 11:32:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:50.374 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.374 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.632 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.632 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:50.632 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.632 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.890 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.890 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:50.890 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.890 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:51.148 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.148 
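Every `port_status PORT ATTR EXPECTED` check above pipes `bdev_nvme_get_io_paths` through the jq filter `.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").ATTR` and string-compares the result. A Python equivalent of that filter, over an illustrative payload that models only the fields the filter touches (the real RPC output contains more):

```python
# Python rendering of the jq filter used by port_status:
#   .poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current
# "sample" is an assumed, trimmed-down payload shape, not real RPC output.
sample = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": False},
        ]}
    ]
}

def path_attr(payload, trsvcid, attr):
    """Return the named attribute of the io_path matching trsvcid."""
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[attr]
    return None
```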
11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:51.148 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:51.148 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:51.405 11:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:52.338 11:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:52.338 11:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:52.338 11:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.338 11:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.596 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.596 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:52.596 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.596 11:32:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.853 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:53.109 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.109 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:53.109 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.109 
11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:53.365 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.365 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:53.365 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.365 11:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.365 11:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.365 11:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:53.365 11:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:53.621 11:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:53.877 11:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:54.808 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:54.808 11:32:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:54.808 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.808 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:55.066 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.066 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:55.066 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.066 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.323 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.323 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.323 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.323 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.323 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.323 11:32:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.323 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.323 11:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.603 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.603 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:55.603 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.603 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.861 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.861 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.861 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.861 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.861 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.861 
11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:24:56.118 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:24:56.118 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:24:56.376 11:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:56.633 11:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:24:57.565 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:24:57.565 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:57.565 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:57.565 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:57.822 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:57.822 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:57.822
11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.822 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.079 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:58.374 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.374 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:58.374 
11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.374 11:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.632 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.632 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:58.632 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.632 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.632 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.632 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:58.632 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:58.891 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:59.148 11:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
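The repeated `port_status` checks traced above all follow one pattern: fetch `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then use `jq` to select the io_path whose `transport.trsvcid` matches the port and read a single boolean field (`current`, `connected`, or `accessible`). A minimal Python sketch of that selection logic follows; the field names mirror the RPC reply shapes visible in this log, the sample values are invented, and this `port_status` is an illustrative re-implementation, not the test's actual shell helper from multipath_status.sh:

```python
# Illustrative re-implementation of the jq filter used throughout this log:
#   .poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").<attr>
# applied to a bdev_nvme_get_io_paths-style reply (structure assumed from the log).
def port_status(reply, port, attr):
    for group in reply.get("poll_groups", []):
        for path in group.get("io_paths", []):
            if path["transport"]["trsvcid"] == port:
                return path.get(attr)
    return None  # no io_path registered for that trsvcid

# Sample reply shaped like the output these checks query (values invented).
reply = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": True},
        ]}
    ]
}
```

The shell test then compares the extracted string against the expected value (`[[ true == \t\r\u\e ]]` in the trace); the Python equivalent is a direct boolean comparison on the returned field.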
00:25:00.118 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:00.118 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:00.118 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:00.118 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.376 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.376 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:00.376 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.376 11:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.634 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.892 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.892 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.892 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.892 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:01.150 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.150 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:01.150 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.150 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:25:01.408 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.408 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:01.408 11:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:01.408 11:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:01.665 11:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:02.599 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:02.599 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.599 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.599 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.857 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.857 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:02.857 11:32:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.857 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:03.114 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.114 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:03.114 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.114 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:03.372 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.372 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:03.372 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.372 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.372 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.372 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.372 11:32:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.372 11:32:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.630 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.630 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.630 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.630 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.888 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.888 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:03.888 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:04.147 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:04.147 11:32:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
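Each `set_ANA_state` call above is followed by a `check_status` whose expected booleans encode how an ANA state should surface in the path flags: in this run, listeners set to `optimized` or `non_optimized` keep `accessible` true, while the listener set to `inaccessible` (port 4421 here) is expected to report `accessible` false on the next check. A small sketch of that expectation table, inferred only from the check_status arguments in this log and not from SPDK source:

```python
# Expected "accessible" flag per ANA state, as exercised by this test run.
# Inferred from the log: optimized/non_optimized paths remain usable,
# an inaccessible listener's path does not.
EXPECTED_ACCESSIBLE = {
    "optimized": True,
    "non_optimized": True,
    "inaccessible": False,
}

def expected_accessible(ana_state):
    # Raises KeyError for ANA states this log does not exercise.
    return EXPECTED_ACCESSIBLE[ana_state]
```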
00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.519 11:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.519 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.519 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.519 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.519 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:05.777 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.777 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.777 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.777 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.035 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.035 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.035 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.035 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1621694 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1621694 ']' 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1621694 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1621694 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1621694' 00:25:06.293 killing process with pid 1621694 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1621694 00:25:06.293 11:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1621694 00:25:06.554 Connection closed with partial response: 00:25:06.554 00:25:06.554 00:25:06.554 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1621694 00:25:06.554 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:25:06.554 [2024-07-26 11:32:32.630214] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:25:06.554 [2024-07-26 11:32:32.630262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621694 ] 00:25:06.554 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.554 [2024-07-26 11:32:32.694859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.554 [2024-07-26 11:32:32.768716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.554 Running I/O for 90 seconds... 00:25:06.554 [2024-07-26 11:32:46.737400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:06.554 [2024-07-26 11:32:46.737521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.554 [2024-07-26 11:32:46.737615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.554 
[2024-07-26 11:32:46.737632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 
11:32:46.737739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 11:32:46.737828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.554 [2024-07-26 11:32:46.737835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.554 [2024-07-26 
11:32:46.737847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.737853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.737872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.737890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.737910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.737928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 
11:32:46.737948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.737966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.737985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.737996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 
11:32:46.738053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 
11:32:46.738157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.555 [2024-07-26 11:32:46.738252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 
11:32:46.738264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.555 [2024-07-26 11:32:46.738735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.555 [2024-07-26 11:32:46.738741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.738911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.738918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.556 [2024-07-26 11:32:46.739684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.556 [2024-07-26 11:32:46.739690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.739875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.739898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.739921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.739944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.739966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.739982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.557 [2024-07-26 11:32:46.740058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:46.740545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:46.740551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:59.743394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:59.743434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:59.743481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:59.743490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:59.743503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:59.743510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:59.743522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:59.743529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:59.743541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:59.743548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:59.743565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.557 [2024-07-26 11:32:59.743571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.557 [2024-07-26 11:32:59.743584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.743984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.743990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.558 [2024-07-26 11:32:59.744158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.558 [2024-07-26 11:32:59.744171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.559 [2024-07-26 11:32:59.744346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.559 [2024-07-26 11:32:59.744695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.559 [2024-07-26 11:32:59.744716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.559 [2024-07-26 11:32:59.744735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.559 [2024-07-26 11:32:59.744754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.559 [2024-07-26 11:32:59.744766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.559 [2024-07-26 11:32:59.744773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:06.559 [2024-07-26 11:32:59.744784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.559 [2024-07-26 11:32:59.744791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:06.559 [2024-07-26 11:32:59.744803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.559 [2024-07-26 11:32:59.744811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:06.559 [2024-07-26 11:32:59.744825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.559 [2024-07-26 11:32:59.744831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:06.559 [2024-07-26 11:32:59.745163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.559 [2024-07-26 11:32:59.745178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.559 Received shutdown signal, test time was about 27.481922 seconds
00:25:06.559
00:25:06.559 Latency(us)
00:25:06.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:06.559 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:06.559 Verification LBA range: start 0x0 length 0x4000
00:25:06.559 Nvme0n1 : 27.48 10338.82 40.39 0.00 0.00 12357.75 752.88 3019898.88
00:25:06.559 ===================================================================================================================
00:25:06.559 Total : 10338.82 40.39 0.00 0.00 12357.75 752.88 3019898.88
00:25:06.559 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:06.817 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status
-- nvmf/common.sh@489 -- # '[' -n 1621277 ']'
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1621277
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1621277 ']'
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1621277
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1621277
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1621277'
killing process with pid 1621277
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1621277
00:25:06.818 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1621277
00:25:07.076 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:07.077 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:07.077 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:07.077 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:07.077 11:33:02
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:07.077 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:07.077 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:07.077 11:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:09.625 11:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:09.626
00:25:09.626 real 0m40.016s
00:25:09.626 user 1m47.778s
00:25:09.626 sys 0m10.820s
00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:09.626 ************************************
00:25:09.626 END TEST nvmf_host_multipath_status
00:25:09.626 ************************************
00:25:09.626 11:33:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:09.626 11:33:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:09.626 11:33:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:09.626 11:33:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.626 ************************************
00:25:09.626 START TEST nvmf_discovery_remove_ifc
00:25:09.626 ************************************
00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:09.626 * Looking for test storage...
00:25:09.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:09.626 11:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.903 11:33:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.903 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:14.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:14.904 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:14.904 Found net devices under 0000:86:00.0: cvl_0_0 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.904 11:33:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:14.904 Found net devices under 0000:86:00.1: cvl_0_1 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.904 11:33:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.904 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.163 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.163 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.163 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:15.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:15.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:25:15.163 00:25:15.163 --- 10.0.0.2 ping statistics --- 00:25:15.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.163 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:15.163 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:25:15.163 00:25:15.163 --- 10.0.0.1 ping statistics --- 00:25:15.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.164 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1630604 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1630604 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1630604 ']' 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.164 11:33:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.164 [2024-07-26 11:33:10.710818] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:25:15.164 [2024-07-26 11:33:10.710858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.164 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.164 [2024-07-26 11:33:10.780360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.423 [2024-07-26 11:33:10.863182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.423 [2024-07-26 11:33:10.863213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.423 [2024-07-26 11:33:10.863220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.423 [2024-07-26 11:33:10.863228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.423 [2024-07-26 11:33:10.863232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:15.423 [2024-07-26 11:33:10.863248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.991 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.991 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:15.991 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.991 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.991 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.992 [2024-07-26 11:33:11.572953] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.992 [2024-07-26 11:33:11.581072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:15.992 null0 00:25:15.992 [2024-07-26 11:33:11.613093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1630764 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1630764 /tmp/host.sock 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1630764 ']' 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:15.992 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.992 11:33:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.251 [2024-07-26 11:33:11.679429] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:25:16.251 [2024-07-26 11:33:11.679465] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630764 ] 00:25:16.251 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.251 [2024-07-26 11:33:11.744959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.251 [2024-07-26 11:33:11.816987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.818 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.818 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:16.818 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.818 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:16.818 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.818 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.077 11:33:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.010 [2024-07-26 11:33:13.611073] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:18.010 [2024-07-26 11:33:13.611093] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:18.011 [2024-07-26 11:33:13.611104] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:18.269 [2024-07-26 11:33:13.737486] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:18.269 [2024-07-26 11:33:13.917630] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:18.269 [2024-07-26 11:33:13.917673] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:18.269 [2024-07-26 11:33:13.917693] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:18.269 [2024-07-26 11:33:13.917705] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:18.269 [2024-07-26 11:33:13.917722] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:18.269 [2024-07-26 11:33:13.920201] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1be9e60 was disconnected and freed. delete nvme_qpair. 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.269 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.526 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.526 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:18.526 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:18.526 11:33:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.526 
11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.526 11:33:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.899 11:33:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.832 11:33:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.766 11:33:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.700 11:33:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.074 [2024-07-26 11:33:19.359054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:24.074 [2024-07-26 11:33:19.359089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.074 [2024-07-26 11:33:19.359098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.074 [2024-07-26 11:33:19.359122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.074 [2024-07-26 11:33:19.359129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.074 [2024-07-26 11:33:19.359136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.074 [2024-07-26 11:33:19.359144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.074 [2024-07-26 11:33:19.359152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.074 [2024-07-26 11:33:19.359158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.074 [2024-07-26 11:33:19.359166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.074 [2024-07-26 11:33:19.359173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.074 [2024-07-26 11:33:19.359179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb06b0 is same with the state(5) to be set 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.074 [2024-07-26 11:33:19.369076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb06b0 (9): Bad file descriptor 00:25:24.074 [2024-07-26 11:33:19.379113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.074 11:33:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.010 11:33:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.010 [2024-07-26 11:33:20.431679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:25.010 [2024-07-26 11:33:20.431762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb06b0 with addr=10.0.0.2, port=4420 00:25:25.010 [2024-07-26 11:33:20.431794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb06b0 is same with the state(5) to be set 00:25:25.010 [2024-07-26 11:33:20.431851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb06b0 (9): Bad file descriptor 00:25:25.010 [2024-07-26 11:33:20.432789] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:25.010 [2024-07-26 11:33:20.432851] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:25.010 [2024-07-26 11:33:20.432874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:25.010 [2024-07-26 11:33:20.432895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:25.010 [2024-07-26 11:33:20.432955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:25.010 [2024-07-26 11:33:20.432980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.010 11:33:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.944 [2024-07-26 11:33:21.435471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:25.944 [2024-07-26 11:33:21.435492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:25.944 [2024-07-26 11:33:21.435499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:25.944 [2024-07-26 11:33:21.435506] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:25.944 [2024-07-26 11:33:21.435517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:25.945 [2024-07-26 11:33:21.435535] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:25.945 [2024-07-26 11:33:21.435554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.945 [2024-07-26 11:33:21.435563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.945 [2024-07-26 11:33:21.435577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.945 [2024-07-26 11:33:21.435584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.945 [2024-07-26 11:33:21.435591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.945 [2024-07-26 11:33:21.435596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.945 [2024-07-26 11:33:21.435603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.945 [2024-07-26 11:33:21.435610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.945 [2024-07-26 11:33:21.435617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.945 [2024-07-26 11:33:21.435622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.945 [2024-07-26 11:33:21.435632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:25.945 [2024-07-26 11:33:21.436114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bafa80 (9): Bad file descriptor 00:25:25.945 [2024-07-26 11:33:21.437123] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:25.945 [2024-07-26 11:33:21.437133] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.945 11:33:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.945 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.202 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:26.202 11:33:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.178 11:33:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:27.178 11:33:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.110 [2024-07-26 11:33:23.486724] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:28.110 [2024-07-26 11:33:23.486742] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:28.110 [2024-07-26 11:33:23.486754] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:28.110 [2024-07-26 11:33:23.573011] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:28.110 [2024-07-26 11:33:23.629145] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:28.110 [2024-07-26 11:33:23.629180] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:28.110 [2024-07-26 11:33:23.629197] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:28.110 [2024-07-26 11:33:23.629209] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:28.110 [2024-07-26 11:33:23.629216] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:28.110 [2024-07-26 11:33:23.635215] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bb7180 was disconnected and freed. delete nvme_qpair. 
00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1630764 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1630764 ']' 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1630764 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.110 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1630764 
00:25:28.368 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:28.368 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:28.368 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1630764' 00:25:28.368 killing process with pid 1630764 00:25:28.368 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1630764 00:25:28.368 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1630764 00:25:28.368 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:28.368 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.369 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:28.369 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.369 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:28.369 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.369 11:33:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.369 rmmod nvme_tcp 00:25:28.369 rmmod nvme_fabrics 00:25:28.369 rmmod nvme_keyring 00:25:28.369 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.369 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:28.369 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1630604 ']' 00:25:28.628 
11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1630604 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1630604 ']' 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1630604 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1630604 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1630604' 00:25:28.628 killing process with pid 1630604 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1630604 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1630604 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.628 11:33:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.159 00:25:31.159 real 0m21.557s 00:25:31.159 user 0m26.930s 00:25:31.159 sys 0m5.637s 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.159 ************************************ 00:25:31.159 END TEST nvmf_discovery_remove_ifc 00:25:31.159 ************************************ 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.159 ************************************ 00:25:31.159 START TEST nvmf_identify_kernel_target 00:25:31.159 ************************************ 00:25:31.159 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:31.159 * Looking for test storage... 
00:25:31.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.160 11:33:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:36.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.449 11:33:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:36.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.449 11:33:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:36.449 Found net devices under 0000:86:00.0: cvl_0_0 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:36.449 Found net devices under 0000:86:00.1: cvl_0_1 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:36.449 
11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.449 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.450 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:36.450 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.450 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.450 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:36.450 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:36.450 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.450 11:33:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.450 
11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.450 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:36.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:25:36.709 00:25:36.709 --- 10.0.0.2 ping statistics --- 00:25:36.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.709 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:25:36.709 00:25:36.709 --- 10.0.0.1 ping statistics --- 00:25:36.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.709 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.709 11:33:32 
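[annotation] The netns plumbing traced above (common.sh@229-268) amounts to a short privileged sequence: move one port of the NIC into a private namespace as the target side, address both ends from 10.0.0.0/24, open TCP/4420 on the initiator interface, and ping both directions. A dry-run sketch, assuming the two ice ports are named cvl_0_0/cvl_0_1 as on this node — the `run` wrapper only records commands; swap its body for `"$@"` and run as root to actually apply them:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built in the trace above.
# Assumption: two ports named cvl_0_0 (target side) and cvl_0_1
# (initiator side), as on this test node.
CMDS=()
run() { CMDS+=("$*"); printf '%s\n' "$*"; }   # record instead of execute

ns=cvl_0_0_ns_spdk
run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"                          # target port into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1                   # target ns -> initiator
```

The bidirectional ping at the end is what the `return 0` in the trace depends on: if either direction fails, nvmf_tcp_init fails before any NVMe-oF traffic is attempted.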
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
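[annotation] get_main_ns_ip, traced at common.sh@741-755, just maps the transport type to the *name* of the variable holding the right address and then dereferences it with indirect expansion. A minimal sketch of that selection (variable names taken from the trace; the values here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip: pick the address variable by transport,
# then expand it indirectly. Mirrors common.sh@741-755 in the trace.
get_main_ns_ip() {
  local ip
  local -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP   # rdma tests talk to the target-side IP
    [tcp]=NVMF_INITIATOR_IP       # tcp tests use the initiator-side IP
  )
  local var=${ip_candidates[$TEST_TRANSPORT]}
  ip=${!var}                      # indirect expansion: value of the named variable
  [ -n "$ip" ] && echo "$ip"
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
target_ip=$(get_main_ns_ip)
echo "$target_ip"   # -> 10.0.0.1
```

This is why the trace shows `ip=NVMF_INITIATOR_IP` before `echo 10.0.0.1`: the function first resolves the variable name, then its value, which identify_kernel_nvmf.sh stores as `target_ip`.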
nvmf/common.sh@639 -- # local block nvme 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:36.709 11:33:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:39.244 Waiting for block devices as requested 00:25:39.503 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:39.503 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:39.762 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:39.762 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:39.762 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:39.762 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:40.021 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:40.021 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:40.021 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:40.279 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:40.279 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:40.279 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:40.279 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:40.538 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:40.538 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:40.538 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:40.797 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:40.797 No valid GPT data, bailing 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:40.797 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:41.057 00:25:41.057 Discovery Log Number of Records 2, Generation counter 2 00:25:41.057 =====Discovery Log Entry 0====== 00:25:41.057 trtype: tcp 00:25:41.057 adrfam: ipv4 00:25:41.057 subtype: current discovery subsystem 00:25:41.057 treq: not specified, sq flow control disable supported 00:25:41.057 portid: 1 00:25:41.057 trsvcid: 4420 00:25:41.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:41.057 traddr: 10.0.0.1 00:25:41.057 eflags: none 00:25:41.057 sectype: none 00:25:41.057 =====Discovery Log Entry 1====== 00:25:41.057 trtype: tcp 00:25:41.057 adrfam: ipv4 00:25:41.057 subtype: nvme subsystem 00:25:41.057 treq: not specified, sq flow control disable supported 00:25:41.057 portid: 1 
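[annotation] configure_kernel_target (common.sh@632-677 above) drives the kernel nvmet target purely through configfs: mkdir the subsystem, namespace, and port directories, echo the attributes, then symlink the subsystem under the port. The sketch below replays that sequence against a scratch root so it runs without the nvmet module; on a real host NVMET is /sys/kernel/config/nvmet and the attribute files already exist. Note the attribute file names are the standard kernel nvmet configfs layout, not read from the trace — the xtrace shows the echoes but truncates their redirect targets:

```shell
#!/usr/bin/env bash
# Replay of the configfs steps from the trace against a scratch root.
# Assumption: attribute file names follow the standard kernel nvmet
# configfs layout; the trace shows the echoes but not their targets.
NVMET=${NVMET:-$(mktemp -d)}       # real systems: /sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
sub=$NVMET/subsystems/$nqn
port=$NVMET/ports/1

mkdir -p "$sub/namespaces/1" "$port/subsystems"
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # back the ns with the local NVMe disk
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"              # listen on the initiator-reachable address
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/$nqn"                 # expose the subsystem on the port
```

Only after the final `ln -s` does the kernel start listening, which is why the `nvme discover` that follows in the trace immediately sees two records: the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn.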
00:25:41.057 trsvcid: 4420 00:25:41.057 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:41.057 traddr: 10.0.0.1 00:25:41.057 eflags: none 00:25:41.057 sectype: none 00:25:41.057 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:41.057 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:41.057 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.057 ===================================================== 00:25:41.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:41.057 ===================================================== 00:25:41.057 Controller Capabilities/Features 00:25:41.057 ================================ 00:25:41.057 Vendor ID: 0000 00:25:41.057 Subsystem Vendor ID: 0000 00:25:41.057 Serial Number: 8406414f4f5df7c68a90 00:25:41.057 Model Number: Linux 00:25:41.057 Firmware Version: 6.7.0-68 00:25:41.057 Recommended Arb Burst: 0 00:25:41.057 IEEE OUI Identifier: 00 00 00 00:25:41.057 Multi-path I/O 00:25:41.057 May have multiple subsystem ports: No 00:25:41.057 May have multiple controllers: No 00:25:41.057 Associated with SR-IOV VF: No 00:25:41.057 Max Data Transfer Size: Unlimited 00:25:41.057 Max Number of Namespaces: 0 00:25:41.057 Max Number of I/O Queues: 1024 00:25:41.057 NVMe Specification Version (VS): 1.3 00:25:41.057 NVMe Specification Version (Identify): 1.3 00:25:41.057 Maximum Queue Entries: 1024 00:25:41.057 Contiguous Queues Required: No 00:25:41.057 Arbitration Mechanisms Supported 00:25:41.057 Weighted Round Robin: Not Supported 00:25:41.057 Vendor Specific: Not Supported 00:25:41.057 Reset Timeout: 7500 ms 00:25:41.057 Doorbell Stride: 4 bytes 00:25:41.057 NVM Subsystem Reset: Not Supported 00:25:41.057 Command Sets Supported 00:25:41.057 NVM Command Set: Supported 00:25:41.057 Boot Partition: Not Supported 
00:25:41.057 Memory Page Size Minimum: 4096 bytes 00:25:41.057 Memory Page Size Maximum: 4096 bytes 00:25:41.057 Persistent Memory Region: Not Supported 00:25:41.057 Optional Asynchronous Events Supported 00:25:41.057 Namespace Attribute Notices: Not Supported 00:25:41.057 Firmware Activation Notices: Not Supported 00:25:41.057 ANA Change Notices: Not Supported 00:25:41.057 PLE Aggregate Log Change Notices: Not Supported 00:25:41.057 LBA Status Info Alert Notices: Not Supported 00:25:41.057 EGE Aggregate Log Change Notices: Not Supported 00:25:41.057 Normal NVM Subsystem Shutdown event: Not Supported 00:25:41.057 Zone Descriptor Change Notices: Not Supported 00:25:41.057 Discovery Log Change Notices: Supported 00:25:41.057 Controller Attributes 00:25:41.057 128-bit Host Identifier: Not Supported 00:25:41.057 Non-Operational Permissive Mode: Not Supported 00:25:41.057 NVM Sets: Not Supported 00:25:41.057 Read Recovery Levels: Not Supported 00:25:41.057 Endurance Groups: Not Supported 00:25:41.057 Predictable Latency Mode: Not Supported 00:25:41.057 Traffic Based Keep ALive: Not Supported 00:25:41.057 Namespace Granularity: Not Supported 00:25:41.057 SQ Associations: Not Supported 00:25:41.057 UUID List: Not Supported 00:25:41.057 Multi-Domain Subsystem: Not Supported 00:25:41.057 Fixed Capacity Management: Not Supported 00:25:41.057 Variable Capacity Management: Not Supported 00:25:41.057 Delete Endurance Group: Not Supported 00:25:41.057 Delete NVM Set: Not Supported 00:25:41.057 Extended LBA Formats Supported: Not Supported 00:25:41.057 Flexible Data Placement Supported: Not Supported 00:25:41.057 00:25:41.057 Controller Memory Buffer Support 00:25:41.057 ================================ 00:25:41.057 Supported: No 00:25:41.057 00:25:41.057 Persistent Memory Region Support 00:25:41.057 ================================ 00:25:41.057 Supported: No 00:25:41.057 00:25:41.057 Admin Command Set Attributes 00:25:41.057 ============================ 00:25:41.057 Security 
Send/Receive: Not Supported 00:25:41.057 Format NVM: Not Supported 00:25:41.057 Firmware Activate/Download: Not Supported 00:25:41.057 Namespace Management: Not Supported 00:25:41.057 Device Self-Test: Not Supported 00:25:41.057 Directives: Not Supported 00:25:41.057 NVMe-MI: Not Supported 00:25:41.057 Virtualization Management: Not Supported 00:25:41.057 Doorbell Buffer Config: Not Supported 00:25:41.057 Get LBA Status Capability: Not Supported 00:25:41.057 Command & Feature Lockdown Capability: Not Supported 00:25:41.057 Abort Command Limit: 1 00:25:41.057 Async Event Request Limit: 1 00:25:41.057 Number of Firmware Slots: N/A 00:25:41.057 Firmware Slot 1 Read-Only: N/A 00:25:41.057 Firmware Activation Without Reset: N/A 00:25:41.057 Multiple Update Detection Support: N/A 00:25:41.058 Firmware Update Granularity: No Information Provided 00:25:41.058 Per-Namespace SMART Log: No 00:25:41.058 Asymmetric Namespace Access Log Page: Not Supported 00:25:41.058 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:41.058 Command Effects Log Page: Not Supported 00:25:41.058 Get Log Page Extended Data: Supported 00:25:41.058 Telemetry Log Pages: Not Supported 00:25:41.058 Persistent Event Log Pages: Not Supported 00:25:41.058 Supported Log Pages Log Page: May Support 00:25:41.058 Commands Supported & Effects Log Page: Not Supported 00:25:41.058 Feature Identifiers & Effects Log Page:May Support 00:25:41.058 NVMe-MI Commands & Effects Log Page: May Support 00:25:41.058 Data Area 4 for Telemetry Log: Not Supported 00:25:41.058 Error Log Page Entries Supported: 1 00:25:41.058 Keep Alive: Not Supported 00:25:41.058 00:25:41.058 NVM Command Set Attributes 00:25:41.058 ========================== 00:25:41.058 Submission Queue Entry Size 00:25:41.058 Max: 1 00:25:41.058 Min: 1 00:25:41.058 Completion Queue Entry Size 00:25:41.058 Max: 1 00:25:41.058 Min: 1 00:25:41.058 Number of Namespaces: 0 00:25:41.058 Compare Command: Not Supported 00:25:41.058 Write Uncorrectable Command: 
Not Supported 00:25:41.058 Dataset Management Command: Not Supported 00:25:41.058 Write Zeroes Command: Not Supported 00:25:41.058 Set Features Save Field: Not Supported 00:25:41.058 Reservations: Not Supported 00:25:41.058 Timestamp: Not Supported 00:25:41.058 Copy: Not Supported 00:25:41.058 Volatile Write Cache: Not Present 00:25:41.058 Atomic Write Unit (Normal): 1 00:25:41.058 Atomic Write Unit (PFail): 1 00:25:41.058 Atomic Compare & Write Unit: 1 00:25:41.058 Fused Compare & Write: Not Supported 00:25:41.058 Scatter-Gather List 00:25:41.058 SGL Command Set: Supported 00:25:41.058 SGL Keyed: Not Supported 00:25:41.058 SGL Bit Bucket Descriptor: Not Supported 00:25:41.058 SGL Metadata Pointer: Not Supported 00:25:41.058 Oversized SGL: Not Supported 00:25:41.058 SGL Metadata Address: Not Supported 00:25:41.058 SGL Offset: Supported 00:25:41.058 Transport SGL Data Block: Not Supported 00:25:41.058 Replay Protected Memory Block: Not Supported 00:25:41.058 00:25:41.058 Firmware Slot Information 00:25:41.058 ========================= 00:25:41.058 Active slot: 0 00:25:41.058 00:25:41.058 00:25:41.058 Error Log 00:25:41.058 ========= 00:25:41.058 00:25:41.058 Active Namespaces 00:25:41.058 ================= 00:25:41.058 Discovery Log Page 00:25:41.058 ================== 00:25:41.058 Generation Counter: 2 00:25:41.058 Number of Records: 2 00:25:41.058 Record Format: 0 00:25:41.058 00:25:41.058 Discovery Log Entry 0 00:25:41.058 ---------------------- 00:25:41.058 Transport Type: 3 (TCP) 00:25:41.058 Address Family: 1 (IPv4) 00:25:41.058 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:41.058 Entry Flags: 00:25:41.058 Duplicate Returned Information: 0 00:25:41.058 Explicit Persistent Connection Support for Discovery: 0 00:25:41.058 Transport Requirements: 00:25:41.058 Secure Channel: Not Specified 00:25:41.058 Port ID: 1 (0x0001) 00:25:41.058 Controller ID: 65535 (0xffff) 00:25:41.058 Admin Max SQ Size: 32 00:25:41.058 Transport Service Identifier: 4420 
00:25:41.058 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:41.058 Transport Address: 10.0.0.1 00:25:41.058 Discovery Log Entry 1 00:25:41.058 ---------------------- 00:25:41.058 Transport Type: 3 (TCP) 00:25:41.058 Address Family: 1 (IPv4) 00:25:41.058 Subsystem Type: 2 (NVM Subsystem) 00:25:41.058 Entry Flags: 00:25:41.058 Duplicate Returned Information: 0 00:25:41.058 Explicit Persistent Connection Support for Discovery: 0 00:25:41.058 Transport Requirements: 00:25:41.058 Secure Channel: Not Specified 00:25:41.058 Port ID: 1 (0x0001) 00:25:41.058 Controller ID: 65535 (0xffff) 00:25:41.058 Admin Max SQ Size: 32 00:25:41.058 Transport Service Identifier: 4420 00:25:41.058 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:41.058 Transport Address: 10.0.0.1 00:25:41.058 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:41.058 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.058 get_feature(0x01) failed 00:25:41.058 get_feature(0x02) failed 00:25:41.058 get_feature(0x04) failed 00:25:41.058 ===================================================== 00:25:41.058 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:41.058 ===================================================== 00:25:41.058 Controller Capabilities/Features 00:25:41.058 ================================ 00:25:41.058 Vendor ID: 0000 00:25:41.058 Subsystem Vendor ID: 0000 00:25:41.058 Serial Number: 1fd1c33ef7f55a085a9c 00:25:41.058 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:41.058 Firmware Version: 6.7.0-68 00:25:41.058 Recommended Arb Burst: 6 00:25:41.058 IEEE OUI Identifier: 00 00 00 00:25:41.058 Multi-path I/O 00:25:41.058 May have multiple subsystem ports: Yes 00:25:41.058 May have multiple 
controllers: Yes 00:25:41.058 Associated with SR-IOV VF: No 00:25:41.058 Max Data Transfer Size: Unlimited 00:25:41.058 Max Number of Namespaces: 1024 00:25:41.058 Max Number of I/O Queues: 128 00:25:41.058 NVMe Specification Version (VS): 1.3 00:25:41.058 NVMe Specification Version (Identify): 1.3 00:25:41.058 Maximum Queue Entries: 1024 00:25:41.058 Contiguous Queues Required: No 00:25:41.058 Arbitration Mechanisms Supported 00:25:41.058 Weighted Round Robin: Not Supported 00:25:41.058 Vendor Specific: Not Supported 00:25:41.058 Reset Timeout: 7500 ms 00:25:41.058 Doorbell Stride: 4 bytes 00:25:41.058 NVM Subsystem Reset: Not Supported 00:25:41.058 Command Sets Supported 00:25:41.058 NVM Command Set: Supported 00:25:41.058 Boot Partition: Not Supported 00:25:41.058 Memory Page Size Minimum: 4096 bytes 00:25:41.058 Memory Page Size Maximum: 4096 bytes 00:25:41.058 Persistent Memory Region: Not Supported 00:25:41.058 Optional Asynchronous Events Supported 00:25:41.058 Namespace Attribute Notices: Supported 00:25:41.058 Firmware Activation Notices: Not Supported 00:25:41.058 ANA Change Notices: Supported 00:25:41.058 PLE Aggregate Log Change Notices: Not Supported 00:25:41.058 LBA Status Info Alert Notices: Not Supported 00:25:41.058 EGE Aggregate Log Change Notices: Not Supported 00:25:41.058 Normal NVM Subsystem Shutdown event: Not Supported 00:25:41.058 Zone Descriptor Change Notices: Not Supported 00:25:41.058 Discovery Log Change Notices: Not Supported 00:25:41.058 Controller Attributes 00:25:41.058 128-bit Host Identifier: Supported 00:25:41.058 Non-Operational Permissive Mode: Not Supported 00:25:41.058 NVM Sets: Not Supported 00:25:41.058 Read Recovery Levels: Not Supported 00:25:41.058 Endurance Groups: Not Supported 00:25:41.058 Predictable Latency Mode: Not Supported 00:25:41.058 Traffic Based Keep ALive: Supported 00:25:41.058 Namespace Granularity: Not Supported 00:25:41.058 SQ Associations: Not Supported 00:25:41.058 UUID List: Not Supported 
00:25:41.058 Multi-Domain Subsystem: Not Supported 00:25:41.058 Fixed Capacity Management: Not Supported 00:25:41.058 Variable Capacity Management: Not Supported 00:25:41.058 Delete Endurance Group: Not Supported 00:25:41.058 Delete NVM Set: Not Supported 00:25:41.058 Extended LBA Formats Supported: Not Supported 00:25:41.058 Flexible Data Placement Supported: Not Supported 00:25:41.058 00:25:41.058 Controller Memory Buffer Support 00:25:41.058 ================================ 00:25:41.058 Supported: No 00:25:41.058 00:25:41.058 Persistent Memory Region Support 00:25:41.058 ================================ 00:25:41.058 Supported: No 00:25:41.058 00:25:41.058 Admin Command Set Attributes 00:25:41.058 ============================ 00:25:41.058 Security Send/Receive: Not Supported 00:25:41.058 Format NVM: Not Supported 00:25:41.058 Firmware Activate/Download: Not Supported 00:25:41.058 Namespace Management: Not Supported 00:25:41.058 Device Self-Test: Not Supported 00:25:41.058 Directives: Not Supported 00:25:41.058 NVMe-MI: Not Supported 00:25:41.058 Virtualization Management: Not Supported 00:25:41.058 Doorbell Buffer Config: Not Supported 00:25:41.058 Get LBA Status Capability: Not Supported 00:25:41.058 Command & Feature Lockdown Capability: Not Supported 00:25:41.058 Abort Command Limit: 4 00:25:41.058 Async Event Request Limit: 4 00:25:41.058 Number of Firmware Slots: N/A 00:25:41.058 Firmware Slot 1 Read-Only: N/A 00:25:41.058 Firmware Activation Without Reset: N/A 00:25:41.059 Multiple Update Detection Support: N/A 00:25:41.059 Firmware Update Granularity: No Information Provided 00:25:41.059 Per-Namespace SMART Log: Yes 00:25:41.059 Asymmetric Namespace Access Log Page: Supported 00:25:41.059 ANA Transition Time : 10 sec 00:25:41.059 00:25:41.059 Asymmetric Namespace Access Capabilities 00:25:41.059 ANA Optimized State : Supported 00:25:41.059 ANA Non-Optimized State : Supported 00:25:41.059 ANA Inaccessible State : Supported 00:25:41.059 ANA Persistent Loss 
State : Supported 00:25:41.059 ANA Change State : Supported 00:25:41.059 ANAGRPID is not changed : No 00:25:41.059 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:41.059 00:25:41.059 ANA Group Identifier Maximum : 128 00:25:41.059 Number of ANA Group Identifiers : 128 00:25:41.059 Max Number of Allowed Namespaces : 1024 00:25:41.059 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:41.059 Command Effects Log Page: Supported 00:25:41.059 Get Log Page Extended Data: Supported 00:25:41.059 Telemetry Log Pages: Not Supported 00:25:41.059 Persistent Event Log Pages: Not Supported 00:25:41.059 Supported Log Pages Log Page: May Support 00:25:41.059 Commands Supported & Effects Log Page: Not Supported 00:25:41.059 Feature Identifiers & Effects Log Page:May Support 00:25:41.059 NVMe-MI Commands & Effects Log Page: May Support 00:25:41.059 Data Area 4 for Telemetry Log: Not Supported 00:25:41.059 Error Log Page Entries Supported: 128 00:25:41.059 Keep Alive: Supported 00:25:41.059 Keep Alive Granularity: 1000 ms 00:25:41.059 00:25:41.059 NVM Command Set Attributes 00:25:41.059 ========================== 00:25:41.059 Submission Queue Entry Size 00:25:41.059 Max: 64 00:25:41.059 Min: 64 00:25:41.059 Completion Queue Entry Size 00:25:41.059 Max: 16 00:25:41.059 Min: 16 00:25:41.059 Number of Namespaces: 1024 00:25:41.059 Compare Command: Not Supported 00:25:41.059 Write Uncorrectable Command: Not Supported 00:25:41.059 Dataset Management Command: Supported 00:25:41.059 Write Zeroes Command: Supported 00:25:41.059 Set Features Save Field: Not Supported 00:25:41.059 Reservations: Not Supported 00:25:41.059 Timestamp: Not Supported 00:25:41.059 Copy: Not Supported 00:25:41.059 Volatile Write Cache: Present 00:25:41.059 Atomic Write Unit (Normal): 1 00:25:41.059 Atomic Write Unit (PFail): 1 00:25:41.059 Atomic Compare & Write Unit: 1 00:25:41.059 Fused Compare & Write: Not Supported 00:25:41.059 Scatter-Gather List 00:25:41.059 SGL Command Set: Supported 00:25:41.059 SGL 
Keyed: Not Supported 00:25:41.059 SGL Bit Bucket Descriptor: Not Supported 00:25:41.059 SGL Metadata Pointer: Not Supported 00:25:41.059 Oversized SGL: Not Supported 00:25:41.059 SGL Metadata Address: Not Supported 00:25:41.059 SGL Offset: Supported 00:25:41.059 Transport SGL Data Block: Not Supported 00:25:41.059 Replay Protected Memory Block: Not Supported 00:25:41.059 00:25:41.059 Firmware Slot Information 00:25:41.059 ========================= 00:25:41.059 Active slot: 0 00:25:41.059 00:25:41.059 Asymmetric Namespace Access 00:25:41.059 =========================== 00:25:41.059 Change Count : 0 00:25:41.059 Number of ANA Group Descriptors : 1 00:25:41.059 ANA Group Descriptor : 0 00:25:41.059 ANA Group ID : 1 00:25:41.059 Number of NSID Values : 1 00:25:41.059 Change Count : 0 00:25:41.059 ANA State : 1 00:25:41.059 Namespace Identifier : 1 00:25:41.059 00:25:41.059 Commands Supported and Effects 00:25:41.059 ============================== 00:25:41.059 Admin Commands 00:25:41.059 -------------- 00:25:41.059 Get Log Page (02h): Supported 00:25:41.059 Identify (06h): Supported 00:25:41.059 Abort (08h): Supported 00:25:41.059 Set Features (09h): Supported 00:25:41.059 Get Features (0Ah): Supported 00:25:41.059 Asynchronous Event Request (0Ch): Supported 00:25:41.059 Keep Alive (18h): Supported 00:25:41.059 I/O Commands 00:25:41.059 ------------ 00:25:41.059 Flush (00h): Supported 00:25:41.059 Write (01h): Supported LBA-Change 00:25:41.059 Read (02h): Supported 00:25:41.059 Write Zeroes (08h): Supported LBA-Change 00:25:41.059 Dataset Management (09h): Supported 00:25:41.059 00:25:41.059 Error Log 00:25:41.059 ========= 00:25:41.059 Entry: 0 00:25:41.059 Error Count: 0x3 00:25:41.059 Submission Queue Id: 0x0 00:25:41.059 Command Id: 0x5 00:25:41.059 Phase Bit: 0 00:25:41.059 Status Code: 0x2 00:25:41.059 Status Code Type: 0x0 00:25:41.059 Do Not Retry: 1 00:25:41.059 Error Location: 0x28 00:25:41.059 LBA: 0x0 00:25:41.059 Namespace: 0x0 00:25:41.059 Vendor Log Page: 
0x0 00:25:41.059 ----------- 00:25:41.059 Entry: 1 00:25:41.059 Error Count: 0x2 00:25:41.059 Submission Queue Id: 0x0 00:25:41.059 Command Id: 0x5 00:25:41.059 Phase Bit: 0 00:25:41.059 Status Code: 0x2 00:25:41.059 Status Code Type: 0x0 00:25:41.059 Do Not Retry: 1 00:25:41.059 Error Location: 0x28 00:25:41.059 LBA: 0x0 00:25:41.059 Namespace: 0x0 00:25:41.059 Vendor Log Page: 0x0 00:25:41.059 ----------- 00:25:41.059 Entry: 2 00:25:41.059 Error Count: 0x1 00:25:41.059 Submission Queue Id: 0x0 00:25:41.059 Command Id: 0x4 00:25:41.059 Phase Bit: 0 00:25:41.059 Status Code: 0x2 00:25:41.059 Status Code Type: 0x0 00:25:41.059 Do Not Retry: 1 00:25:41.059 Error Location: 0x28 00:25:41.059 LBA: 0x0 00:25:41.059 Namespace: 0x0 00:25:41.059 Vendor Log Page: 0x0 00:25:41.059 00:25:41.059 Number of Queues 00:25:41.059 ================ 00:25:41.059 Number of I/O Submission Queues: 128 00:25:41.059 Number of I/O Completion Queues: 128 00:25:41.059 00:25:41.059 ZNS Specific Controller Data 00:25:41.059 ============================ 00:25:41.059 Zone Append Size Limit: 0 00:25:41.059 00:25:41.059 00:25:41.059 Active Namespaces 00:25:41.059 ================= 00:25:41.059 get_feature(0x05) failed 00:25:41.059 Namespace ID:1 00:25:41.059 Command Set Identifier: NVM (00h) 00:25:41.059 Deallocate: Supported 00:25:41.059 Deallocated/Unwritten Error: Not Supported 00:25:41.059 Deallocated Read Value: Unknown 00:25:41.059 Deallocate in Write Zeroes: Not Supported 00:25:41.059 Deallocated Guard Field: 0xFFFF 00:25:41.059 Flush: Supported 00:25:41.059 Reservation: Not Supported 00:25:41.059 Namespace Sharing Capabilities: Multiple Controllers 00:25:41.059 Size (in LBAs): 3125627568 (1490GiB) 00:25:41.059 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:41.059 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:41.059 UUID: 00522532-1584-4fbb-a196-35b2fcbde080 00:25:41.059 Thin Provisioning: Not Supported 00:25:41.059 Per-NS Atomic Units: Yes 00:25:41.059 Atomic Boundary Size (Normal): 0 
00:25:41.059 Atomic Boundary Size (PFail): 0 00:25:41.059 Atomic Boundary Offset: 0 00:25:41.059 NGUID/EUI64 Never Reused: No 00:25:41.059 ANA group ID: 1 00:25:41.059 Namespace Write Protected: No 00:25:41.059 Number of LBA Formats: 1 00:25:41.059 Current LBA Format: LBA Format #00 00:25:41.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:41.059 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:41.059 rmmod nvme_tcp 00:25:41.059 rmmod nvme_fabrics 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:41.059 
11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:41.059 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.060 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.060 11:33:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:43.595 11:33:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:43.595 11:33:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:46.131 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:46.131 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.510 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:47.769 00:25:47.769 real 0m16.887s 00:25:47.769 user 0m4.143s 00:25:47.769 sys 0m8.436s 00:25:47.769 11:33:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.769 11:33:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.769 ************************************ 00:25:47.769 END TEST nvmf_identify_kernel_target 00:25:47.769 ************************************ 00:25:47.769 11:33:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:47.769 11:33:43 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:47.769 11:33:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.769 11:33:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.769 ************************************ 00:25:47.769 START TEST nvmf_auth_host 00:25:47.769 ************************************ 00:25:47.769 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:48.028 * Looking for test storage... 00:25:48.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.028 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.029 11:33:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.029 11:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:53.327 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:53.328 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:53.328 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:53.328 Found net devices under 0000:86:00.0: cvl_0_0 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:53.328 Found net devices under 0000:86:00.1: cvl_0_1 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:53.328 11:33:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.328 11:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:53.587 00:25:53.587 --- 10.0.0.2 ping statistics --- 00:25:53.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.587 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:25:53.587 00:25:53.587 --- 10.0.0.1 ping statistics --- 00:25:53.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.587 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1642634 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1642634 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:53.587 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1642634 ']' 00:25:53.588 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.588 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:53.588 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:53.588 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:53.588 11:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6397453e0fa93b8488b995190fc70324 00:25:54.524 11:33:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RUB 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6397453e0fa93b8488b995190fc70324 0 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6397453e0fa93b8488b995190fc70324 0 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6397453e0fa93b8488b995190fc70324 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RUB 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RUB 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RUB 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:54.524 11:33:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c908debba369dc24fcfc3c3eb33590f1ac980ca1bc0f7391d65b620742b6fc93 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bEW 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c908debba369dc24fcfc3c3eb33590f1ac980ca1bc0f7391d65b620742b6fc93 3 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c908debba369dc24fcfc3c3eb33590f1ac980ca1bc0f7391d65b620742b6fc93 3 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c908debba369dc24fcfc3c3eb33590f1ac980ca1bc0f7391d65b620742b6fc93 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bEW 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bEW 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.bEW 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.524 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7a83e0a1382c1b0e52e6471aae5e37f9c27b70f4fc71fab5 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vl4 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7a83e0a1382c1b0e52e6471aae5e37f9c27b70f4fc71fab5 0 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7a83e0a1382c1b0e52e6471aae5e37f9c27b70f4fc71fab5 0 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7a83e0a1382c1b0e52e6471aae5e37f9c27b70f4fc71fab5 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vl4 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vl4 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.vl4 
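The `gen_dhchap_key`/`format_dhchap_key` steps traced above (draw `len/2` random bytes with `xxd -p -c0 -l N /dev/urandom`, then run an inline `python -` to wrap them) produce the `DHHC-1:<digest>:<base64>:` secrets that appear later in the log. A minimal sketch of what those helpers appear to do, assuming the base64 payload is the key material followed by its CRC-32 in little-endian byte order (the function names mirror the shell helpers; they are illustrations, not SPDK's code):

```python
import base64
import os
import zlib

def gen_dhchap_key(hex_len: int) -> str:
    """Mimic `xxd -p -c0 -l hex_len/2 /dev/urandom`: random bytes as a hex string."""
    return os.urandom(hex_len // 2).hex()

def format_dhchap_key(key: str, digest: int) -> str:
    """Wrap key material in the DHHC-1 secret representation:
    DHHC-1:<2-digit digest id>:<base64(key_bytes || crc32(key_bytes))>:"""
    data = key.encode()
    data += zlib.crc32(data).to_bytes(4, "little")  # assumed little-endian, per the round-trip below
    return f"DHHC-1:{digest:02x}:{base64.b64encode(data).decode()}:"
```

For example, the 48-character hex key generated for `keys[1]` in this log (`7a83e0...fab5`) with digest id 0 yields a secret beginning `DHHC-1:00:N2E4M2Uw...`, matching the `--key` value visible further down in the trace.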
00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e3efe9405dc16e6886b89bad9439ed8e70b067f73f19bb47 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zsm 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e3efe9405dc16e6886b89bad9439ed8e70b067f73f19bb47 2 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e3efe9405dc16e6886b89bad9439ed8e70b067f73f19bb47 2 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e3efe9405dc16e6886b89bad9439ed8e70b067f73f19bb47 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:54.783 11:33:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zsm 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zsm 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zsm 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=671f488b568dce9538ea6e19989727fe 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MWL 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 671f488b568dce9538ea6e19989727fe 1 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 671f488b568dce9538ea6e19989727fe 1 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=671f488b568dce9538ea6e19989727fe 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MWL 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MWL 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.MWL 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:54.783 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0903515676e1be30309734d2a385127b 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.93N 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0903515676e1be30309734d2a385127b 1 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0903515676e1be30309734d2a385127b 1 00:25:54.784 11:33:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0903515676e1be30309734d2a385127b 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.93N 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.93N 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.93N 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=be181836e9b7a33c4f25d5ebf122b2b1172012b1ccd1986f 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sp3 00:25:54.784 11:33:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key be181836e9b7a33c4f25d5ebf122b2b1172012b1ccd1986f 2 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 be181836e9b7a33c4f25d5ebf122b2b1172012b1ccd1986f 2 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=be181836e9b7a33c4f25d5ebf122b2b1172012b1ccd1986f 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:54.784 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sp3 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sp3 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.sp3 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=a7c037c4d459dbeac4c1b8a80468f3a4 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Tfc 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a7c037c4d459dbeac4c1b8a80468f3a4 0 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a7c037c4d459dbeac4c1b8a80468f3a4 0 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a7c037c4d459dbeac4c1b8a80468f3a4 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Tfc 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Tfc 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Tfc 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a87a936dc29db85873286d35aefe833a27e0c023f85cc268d7077b1d49c1451a 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.R7D 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a87a936dc29db85873286d35aefe833a27e0c023f85cc268d7077b1d49c1451a 3 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a87a936dc29db85873286d35aefe833a27e0c023f85cc268d7077b1d49c1451a 3 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a87a936dc29db85873286d35aefe833a27e0c023f85cc268d7077b1d49c1451a 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.R7D 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.R7D 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.R7D 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1642634 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 1642634 ']' 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:55.043 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RUB 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.bEW ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bEW 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.vl4 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zsm ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zsm 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.MWL 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.93N ]] 00:25:55.303 11:33:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.93N 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.sp3 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Tfc ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Tfc 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.R7D 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:55.303 11:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:57.829 Waiting for block devices as requested 00:25:57.829 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:58.086 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:58.086 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:58.343 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:58.343 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:58.343 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:58.343 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:58.601 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:58.601 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:58.601 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:58.601 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:58.893 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:58.893 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:58.893 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:59.160 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:59.160 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:59.160 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:59.727 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:59.727 No valid GPT data, bailing 00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.728 11:33:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:59.728 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:59.986
00:25:59.986 Discovery Log Number of Records 2, Generation counter 2
00:25:59.986 =====Discovery Log Entry 0======
00:25:59.986 trtype: tcp
00:25:59.986 adrfam: ipv4
00:25:59.986 subtype: current discovery subsystem
00:25:59.986 treq: not specified, sq flow control disable supported
00:25:59.986 portid: 1
00:25:59.986 trsvcid: 4420
00:25:59.986 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:59.986 traddr: 10.0.0.1
00:25:59.986 eflags: none
00:25:59.986 sectype: none
00:25:59.986 =====Discovery Log Entry 1======
00:25:59.986 trtype: tcp
00:25:59.986 adrfam: ipv4
00:25:59.986 subtype: nvme subsystem
00:25:59.986 treq: not specified, sq flow control disable supported
00:25:59.986 portid: 1
00:25:59.986 trsvcid: 4420
00:25:59.986 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:59.986 traddr: 10.0.0.1
00:25:59.986 eflags: none
00:25:59.986 sectype: none
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==:
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==:
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==:
00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.986 11:33:55 
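The DHHC-1 strings the test feeds to nvmet_auth_set_key and bdev_nvme_set_options follow the NVMe-oF DH-HMAC-CHAP secret representation: a `DHHC-1` prefix, a two-digit hash identifier (with `00` meaning the secret is not hash-transformed, as I read the format), and a base64 payload that, to my understanding, carries the secret bytes followed by a CRC-32. A hedged sketch that pulls one of the keys from this log apart (nothing here validates the CRC, it only splits fields and measures the decoded payload):

```shell
#!/usr/bin/env bash
# Sketch: dissect one DH-HMAC-CHAP secret from the trace above.
# Format assumed: DHHC-1:<hash id>:<base64(secret || crc32)>:
key='DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==:'

# Split on ':' into prefix, hash id, and base64 payload.
IFS=: read -r prefix hash b64 _ <<< "$key"

# Decoded length = secret length plus the 4-byte CRC-32 tail.
payload_len=$(printf '%s' "$b64" | base64 -d | wc -c)
```

For this key the payload decodes to 52 bytes, consistent with a 48-byte secret plus a 4-byte CRC.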
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.986 nvme0n1 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.986 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:00.245 
11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.245 nvme0n1 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.245 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.504 11:33:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.504 11:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.504 nvme0n1 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.504 11:33:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:00.504 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:00.505 11:33:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.505 11:33:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.505 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.763 nvme0n1 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.763 11:33:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.763 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.022 nvme0n1
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=:
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=:
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.022 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.280 nvme0n1
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR:
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=:
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR:
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]]
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=:
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.280 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.281 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.539 nvme0n1
00:26:01.539 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.539 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:01.539 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.539 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:01.539 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.539 11:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==:
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==:
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==:
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==:
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.539 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.797 nvme0n1
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:01.797 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes:
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT:
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes:
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]]
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT:
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:01.798 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.056 nvme0n1
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==:
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3:
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==:
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3:
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.056 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.314 nvme0n1
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=:
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=:
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.314 nvme0n1
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.314 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.572 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.572 11:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:02.572 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR:
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=:
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR:
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]]
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=:
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.573 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.831 nvme0n1
00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0
== \n\v\m\e\0 ]] 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:02.831 11:33:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.831 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.089 nvme0n1 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.089 
11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.089 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.347 nvme0n1 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.347 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.348 11:33:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.348 11:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:03.606 nvme0n1 00:26:03.606 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.606 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.606 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.606 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.606 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.606 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.865 
11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.865 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.123 nvme0n1 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.123 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.124 11:33:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.124 11:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.381 nvme0n1 00:26:04.381 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.381 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.381 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.381 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.382 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.382 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.639 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.639 11:34:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.639 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.639 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.640 11:34:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.640 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 nvme0n1 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.897 11:34:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.897 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.898 11:34:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.898 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.463 nvme0n1 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.463 11:34:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.463 11:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:05.721 nvme0n1 00:26:05.721 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.721 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.721 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.721 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.721 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.978 
11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.978 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 nvme0n1 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.495 11:34:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.495 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.496 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.496 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.496 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.496 11:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 nvme0n1 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.061 11:34:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.061 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.062 11:34:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.062 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.062 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.062 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.062 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.062 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.062 11:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.627 nvme0n1 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.627 11:34:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:07.627 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.628 11:34:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.628 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.191 nvme0n1 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.191 11:34:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.191 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.192 11:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:08.755 nvme0n1 00:26:08.755 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.755 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.755 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.755 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.755 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.755 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.012 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.013 
11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.013 11:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.578 nvme0n1 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.578 11:34:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.578 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.579 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.836 nvme0n1 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:09.836 11:34:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.836 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.837 nvme0n1 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.837 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.145 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.146 
11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.146 nvme0n1 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.146 11:34:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.146 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:10.405 nvme0n1 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.405 11:34:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.405 
11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.405 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.662 nvme0n1 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.662 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.663 11:34:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.663 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.920 nvme0n1 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.920 11:34:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.920 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.921 11:34:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.921 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.179 nvme0n1 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.179 11:34:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.179 11:34:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.179 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.437 nvme0n1 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.437 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.438 11:34:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.438 11:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.438 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:11.696 nvme0n1 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.696 
11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.696 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.954 nvme0n1 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.954 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.954 11:34:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.955 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.212 nvme0n1 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.212 11:34:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.212 11:34:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.212 11:34:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.469 nvme0n1 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.469 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.727 11:34:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.727 11:34:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.727 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.984 nvme0n1 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.984 11:34:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:12.984 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.985 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:13.242 nvme0n1 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.242 
11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.242 11:34:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.500 nvme0n1 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.500 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.501 11:34:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.501 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.066 nvme0n1 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.066 11:34:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.066 11:34:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.066 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.324 nvme0n1 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.324 11:34:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:14.324 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.582 11:34:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.582 11:34:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.582 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.582 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.582 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.582 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.582 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.582 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.582 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.840 nvme0n1 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:14.840 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.840 11:34:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.841 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
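The `DHHC-1:...` strings echoed throughout this run follow the DH-HMAC-CHAP secret representation used by nvme-cli and the Linux nvme driver: a `DHHC-1:` prefix, a two-hex-digit hash indicator (`00` = no transformation applied to the secret), and a base64 payload holding the raw secret with a 4-byte little-endian CRC-32 appended. The following is a minimal sketch (not part of the test suite) that unpacks one of the keys from the log above; the CRC placement and endianness are assumptions based on that driver behavior.

```python
import base64
import zlib


def parse_dhchap_key(key: str):
    """Split a DHHC-1 secret into (hash indicator, raw secret, crc_ok).

    Assumed format: DHHC-1:<hh>:<base64(secret || crc32_le(secret))>:
    """
    prefix, hh, b64, trailer = key.split(":")
    assert prefix == "DHHC-1" and trailer == ""
    raw = base64.b64decode(b64)
    # Last 4 bytes are assumed to be a little-endian CRC-32 of the secret.
    secret, crc_le = raw[:-4], raw[-4:]
    crc_ok = zlib.crc32(secret) == int.from_bytes(crc_le, "little")
    return int(hh, 16), secret, crc_ok


# Key 0 from the log above: 48 base64 chars -> 36 bytes
# -> 32-byte secret + 4-byte CRC.
hid, secret, ok = parse_dhchap_key(
    "DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR:")
```

The `03`-prefixed keys in the log decode the same way but carry a 64-byte secret (the indicator selects which hash, if any, was used to transform the configured secret).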
00:26:15.406 nvme0n1 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.406 
11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.406 11:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.664 nvme0n1 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.664 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.935 11:34:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.935 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 nvme0n1 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.516 11:34:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.516 11:34:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.516 11:34:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.082 nvme0n1 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.082 11:34:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.082 11:34:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.082 11:34:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.648 nvme0n1 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.649 11:34:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.649 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:18.213 nvme0n1 00:26:18.213 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.213 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.213 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.213 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.213 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.213 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.470 
11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.470 11:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.035 nvme0n1 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.035 11:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.035 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 nvme0n1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:19.293 11:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.293 nvme0n1 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:19.550 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.551 
11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.551 11:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.551 nvme0n1 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.551 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.809 11:34:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:19.809 nvme0n1 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.809 
11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.809 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.068 nvme0n1 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.068 11:34:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.068 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.325 nvme0n1 00:26:20.325 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.325 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.325 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.326 11:34:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.326 11:34:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.326 11:34:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.584 nvme0n1 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.584 11:34:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.584 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.585 11:34:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.585 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.842 nvme0n1 00:26:20.842 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.842 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.842 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.842 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.842 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.842 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.842 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.843 11:34:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.843 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:21.101 nvme0n1 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.101 
11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.101 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.359 nvme0n1 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.359 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.360 11:34:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.360 11:34:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.617 nvme0n1 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.617 11:34:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.617 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.874 11:34:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.874 nvme0n1 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.874 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.132 11:34:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.132 11:34:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.132 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.390 nvme0n1 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.390 11:34:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.390 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.391 11:34:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:22.648 nvme0n1 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.648 
11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.648 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.649 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.649 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.649 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.649 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.649 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.906 nvme0n1 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.906 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.164 11:34:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.164 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 nvme0n1 00:26:23.422 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.422 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.422 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.422 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.422 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 11:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.422 11:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.422 11:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.422 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.423 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.423 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.423 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.989 nvme0n1 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.989 11:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.989 11:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.989 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.247 nvme0n1 00:26:24.247 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.247 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.247 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.247 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.247 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.247 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.506 11:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.506 11:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:24.764 nvme0n1 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.764 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.022 
11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.022 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.280 nvme0n1 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5NzQ1M2UwZmE5M2I4NDg4Yjk5NTE5MGZjNzAzMjQQR7aR: 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwOGRlYmJhMzY5ZGMyNGZjZmMzYzNlYjMzNTkwZjFhYzk4MGNhMWJjMGY3MzkxZDY1YjYyMDc0MmI2ZmM5M/gqEKA=: 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.280 11:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.280 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.281 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.281 11:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.845 nvme0n1 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.845 11:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.845 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.102 11:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.102 11:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 nvme0n1 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 11:34:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjcxZjQ4OGI1NjhkY2U5NTM4ZWE2ZTE5OTg5NzI3ZmUWdZes: 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDkwMzUxNTY3NmUxYmUzMDMwOTczNGQyYTM4NTEyN2JSqHgT: 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.668 11:34:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.668 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.234 nvme0n1 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.234 11:34:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUxODE4MzZlOWI3YTMzYzRmMjVkNWViZjEyMmIyYjExNzIwMTJiMWNjZDE5ODZmunzmpg==: 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTdjMDM3YzRkNDU5ZGJlYWM0YzFiOGE4MDQ2OGYzYTRA71V3: 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.234 11:34:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:27.800 nvme0n1 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTg3YTkzNmRjMjlkYjg1ODczMjg2ZDM1YWVmZTgzM2EyN2UwYzAyM2Y4NWNjMjY4ZDcwNzdiMWQ0OWMxNDUxYTXOMcU=: 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.800 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.058 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.058 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.058 
11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.058 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.058 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.058 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.059 11:34:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.625 nvme0n1 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4M2UwYTEzODJjMWIwZTUyZTY0NzFhYWU1ZTM3ZjljMjdiNzBmNGZjNzFmYWI1SKWBLA==: 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: ]] 00:26:28.625 
11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTNlZmU5NDA1ZGMxNmU2ODg2Yjg5YmFkOTQzOWVkOGU3MGIwNjdmNzNmMTliYjQ3FeGUFQ==: 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.625 request: 00:26:28.625 { 00:26:28.625 "name": "nvme0", 00:26:28.625 "trtype": "tcp", 00:26:28.625 "traddr": "10.0.0.1", 00:26:28.625 "adrfam": "ipv4", 00:26:28.625 "trsvcid": "4420", 00:26:28.625 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.625 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.625 "prchk_reftag": false, 00:26:28.625 "prchk_guard": false, 00:26:28.625 "hdgst": false, 00:26:28.625 "ddgst": false, 00:26:28.625 "method": "bdev_nvme_attach_controller", 00:26:28.625 "req_id": 1 00:26:28.625 } 00:26:28.625 Got JSON-RPC error response 00:26:28.625 response: 00:26:28.625 { 00:26:28.625 "code": -5, 00:26:28.625 "message": "Input/output error" 00:26:28.625 } 00:26:28.625 11:34:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:28.625 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.626 11:34:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.626 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.884 request: 00:26:28.884 { 00:26:28.884 "name": "nvme0", 00:26:28.884 "trtype": "tcp", 00:26:28.884 "traddr": "10.0.0.1", 00:26:28.884 "adrfam": "ipv4", 00:26:28.884 
"trsvcid": "4420", 00:26:28.884 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.884 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.884 "prchk_reftag": false, 00:26:28.884 "prchk_guard": false, 00:26:28.884 "hdgst": false, 00:26:28.884 "ddgst": false, 00:26:28.884 "dhchap_key": "key2", 00:26:28.884 "method": "bdev_nvme_attach_controller", 00:26:28.884 "req_id": 1 00:26:28.884 } 00:26:28.884 Got JSON-RPC error response 00:26:28.884 response: 00:26:28.884 { 00:26:28.884 "code": -5, 00:26:28.884 "message": "Input/output error" 00:26:28.884 } 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.884 
11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.884 11:34:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.884 request: 00:26:28.884 { 00:26:28.884 "name": "nvme0", 00:26:28.884 "trtype": "tcp", 00:26:28.884 "traddr": "10.0.0.1", 00:26:28.884 "adrfam": "ipv4", 00:26:28.884 "trsvcid": "4420", 00:26:28.884 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.884 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.884 "prchk_reftag": false, 00:26:28.884 "prchk_guard": false, 00:26:28.884 "hdgst": false, 00:26:28.884 "ddgst": false, 00:26:28.884 "dhchap_key": "key1", 00:26:28.884 "dhchap_ctrlr_key": "ckey2", 00:26:28.884 "method": "bdev_nvme_attach_controller", 00:26:28.884 "req_id": 1 00:26:28.884 } 00:26:28.884 Got JSON-RPC error response 00:26:28.884 response: 00:26:28.884 { 00:26:28.884 "code": -5, 00:26:28.884 "message": "Input/output error" 00:26:28.884 } 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:28.884 11:34:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:28.884 rmmod nvme_tcp 00:26:28.884 rmmod nvme_fabrics 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1642634 ']' 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1642634 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1642634 ']' 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1642634 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1642634 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1642634' 00:26:28.884 killing process with pid 1642634 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1642634 00:26:28.884 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1642634 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.143 11:34:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:31.678 11:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:34.212 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:34.212 0000:80:04.0 (8086 2021): 
ioatdma -> vfio-pci 00:26:35.588 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:35.847 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RUB /tmp/spdk.key-null.vl4 /tmp/spdk.key-sha256.MWL /tmp/spdk.key-sha384.sp3 /tmp/spdk.key-sha512.R7D /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:35.847 11:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:38.381 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:38.381 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:38.381 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:38.657 00:26:38.657 real 0m50.706s 00:26:38.657 user 0m44.763s 00:26:38.657 sys 0m12.170s 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.658 11:34:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.658 ************************************ 00:26:38.658 END TEST nvmf_auth_host 00:26:38.658 ************************************ 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.658 ************************************ 00:26:38.658 START TEST nvmf_digest 00:26:38.658 ************************************ 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:38.658 * Looking for test storage... 
00:26:38.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.658 11:34:34 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:26:38.658 11:34:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.267 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:45.268 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:45.268 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.268 11:34:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:45.268 Found net devices under 0000:86:00.0: cvl_0_0 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:45.268 Found net devices under 0000:86:00.1: cvl_0_1 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:26:45.268 00:26:45.268 --- 10.0.0.2 ping statistics --- 00:26:45.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.268 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:45.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:26:45.268 00:26:45.268 --- 10.0.0.1 ping statistics --- 00:26:45.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.268 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:45.268 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:45.269 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:45.269 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.269 11:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.269 ************************************ 00:26:45.269 START TEST nvmf_digest_clean 00:26:45.269 ************************************ 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1655900
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1655900
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1655900 ']'
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:45.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:45.269 [2024-07-26 11:34:40.090418] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:26:45.269 [2024-07-26 11:34:40.090463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:45.269 EAL: No free 2048 kB hugepages reported on node 1
00:26:45.269 [2024-07-26 11:34:40.159606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.269 [2024-07-26 11:34:40.237241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:45.269 [2024-07-26 11:34:40.237275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:45.269 [2024-07-26 11:34:40.237282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:45.269 [2024-07-26 11:34:40.237289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:45.269 [2024-07-26 11:34:40.237294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:45.269 [2024-07-26 11:34:40.237313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:45.269 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:45.527 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:45.527 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:26:45.527 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:26:45.527 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:26:45.527 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.527 11:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:45.527 null0
00:26:45.527 [2024-07-26 11:34:41.012842] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:45.527 [2024-07-26 11:34:41.037032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1656063
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1656063 /var/tmp/bperf.sock
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1656063 ']'
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:45.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:45.527 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:45.527 [2024-07-26 11:34:41.086843] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:26:45.527 [2024-07-26 11:34:41.086885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656063 ]
00:26:45.527 EAL: No free 2048 kB hugepages reported on node 1
00:26:45.527 [2024-07-26 11:34:41.148748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.784 [2024-07-26 11:34:41.249810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:46.348 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:46.348 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:26:46.348 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:26:46.348 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:46.348 11:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:46.605 11:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:46.605 11:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:46.862 nvme0n1
00:26:47.119 11:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:47.119 11:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:47.119 Running I/O for 2 seconds...
00:26:49.015 
00:26:49.015 Latency(us)
00:26:49.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:49.015 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:49.015 nvme0n1 : 2.04 25614.77 100.06 0.00 0.00 4898.00 2543.42 44938.97
00:26:49.015 ===================================================================================================================
00:26:49.015 Total : 25614.77 100.06 0.00 0.00 4898.00 2543.42 44938.97
00:26:49.015 0
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:49.273 | select(.opcode=="crc32c")
00:26:49.273 | "\(.module_name) \(.executed)"'
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1656063
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1656063 ']'
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1656063
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1656063
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1656063'
00:26:49.273 killing process with pid 1656063
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1656063
00:26:49.273 Received shutdown signal, test time was about 2.000000 seconds
00:26:49.273 
00:26:49.273 Latency(us)
00:26:49.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:49.273 ===================================================================================================================
00:26:49.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:49.273 11:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1656063
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1656762
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1656762 /var/tmp/bperf.sock
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1656762 ']'
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:49.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:49.531 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:49.531 [2024-07-26 11:34:45.133459] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:26:49.531 [2024-07-26 11:34:45.133506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656762 ]
00:26:49.531 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:49.531 Zero copy mechanism will not be used.
00:26:49.531 EAL: No free 2048 kB hugepages reported on node 1
00:26:49.788 [2024-07-26 11:34:45.201189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:49.788 [2024-07-26 11:34:45.280144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:50.352 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:50.352 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:26:50.352 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:26:50.352 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:50.352 11:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:50.610 11:34:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:50.610 11:34:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:51.176 nvme0n1
00:26:51.176 11:34:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:51.176 11:34:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:51.176 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:51.176 Zero copy mechanism will not be used.
00:26:51.176 Running I/O for 2 seconds...
00:26:53.073 
00:26:53.073 Latency(us)
00:26:53.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.073 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:53.073 nvme0n1 : 2.00 5411.78 676.47 0.00 0.00 2954.18 624.15 5742.20
00:26:53.073 ===================================================================================================================
00:26:53.073 Total : 5411.78 676.47 0.00 0.00 2954.18 624.15 5742.20
00:26:53.073 0
00:26:53.073 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:53.073 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:53.073 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:53.073 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:53.073 | select(.opcode=="crc32c")
00:26:53.073 | "\(.module_name) \(.executed)"'
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1656762
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1656762 ']'
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1656762
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1656762
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1656762'
00:26:53.331 killing process with pid 1656762
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1656762
00:26:53.331 Received shutdown signal, test time was about 2.000000 seconds
00:26:53.331 
00:26:53.331 Latency(us)
00:26:53.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.331 ===================================================================================================================
00:26:53.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:53.331 11:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1656762
00:26:53.589 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:26:53.589 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:53.589 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:53.589 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1657452
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1657452 /var/tmp/bperf.sock
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1657452 ']'
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:53.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:53.590 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:53.590 [2024-07-26 11:34:49.137552] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:26:53.590 [2024-07-26 11:34:49.137601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657452 ]
00:26:53.590 EAL: No free 2048 kB hugepages reported on node 1
00:26:53.590 [2024-07-26 11:34:49.202714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:53.847 [2024-07-26 11:34:49.271176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:54.413 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:54.413 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:26:54.413 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:26:54.413 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:54.413 11:34:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:54.670 11:34:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:54.670 11:34:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:54.928 nvme0n1
00:26:54.928 11:34:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:54.928 11:34:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:54.928 Running I/O for 2 seconds...
00:26:57.455 
00:26:57.455 Latency(us)
00:26:57.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:57.455 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:57.455 nvme0n1 : 2.00 27919.54 109.06 0.00 0.00 4576.39 4369.07 11297.16
00:26:57.455 ===================================================================================================================
00:26:57.455 Total : 27919.54 109.06 0.00 0.00 4576.39 4369.07 11297.16
00:26:57.455 0
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:57.455 | select(.opcode=="crc32c")
00:26:57.455 | "\(.module_name) \(.executed)"'
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1657452
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1657452 ']'
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1657452
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1657452
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1657452'
00:26:57.455 killing process with pid 1657452
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1657452
00:26:57.455 Received shutdown signal, test time was about 2.000000 seconds
00:26:57.455 
00:26:57.455 Latency(us)
00:26:57.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:57.455 ===================================================================================================================
00:26:57.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:57.455 11:34:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1657452
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1658074
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1658074 /var/tmp/bperf.sock
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1658074 ']'
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:57.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:57.455 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:57.455 [2024-07-26 11:34:53.064178] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:26:57.455 [2024-07-26 11:34:53.064225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658074 ]
00:26:57.455 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:57.455 Zero copy mechanism will not be used.
00:26:57.455 EAL: No free 2048 kB hugepages reported on node 1
00:26:57.713 [2024-07-26 11:34:53.131594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:57.713 [2024-07-26 11:34:53.202171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:58.277 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:58.277 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:26:58.277 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:26:58.277 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:58.277 11:34:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:58.534 11:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:58.534 11:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:58.792 nvme0n1
00:26:58.792 11:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:58.792 11:34:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:59.050 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:59.050 Zero copy mechanism will not be used.
00:26:59.050 Running I/O for 2 seconds...
00:27:00.950 
00:27:00.950 Latency(us)
00:27:00.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.950 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:00.950 nvme0n1 : 2.00 6500.29 812.54 0.00 0.00 2456.79 1630.60 12170.97
00:27:00.950 ===================================================================================================================
00:27:00.950 Total : 6500.29 812.54 0.00 0.00 2456.79 1630.60 12170.97
00:27:00.950 0
00:27:00.950 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:00.950 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:00.950 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:00.950 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:00.950 | select(.opcode=="crc32c")
00:27:00.950 | "\(.module_name) \(.executed)"'
00:27:00.950 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1658074
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1658074 ']'
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1658074
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658074
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658074'
00:27:01.208 killing process with pid 1658074
00:27:01.208 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1658074
00:27:01.208 Received shutdown signal, test time was about 2.000000 seconds
00:27:01.208 00:27:01.208 Latency(us) 00:27:01.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.208 =================================================================================================================== 00:27:01.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.209 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1658074 00:27:01.467 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1655900 00:27:01.467 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1655900 ']' 00:27:01.467 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1655900 00:27:01.467 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:01.467 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.467 11:34:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1655900 00:27:01.467 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:01.467 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:01.467 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1655900' 00:27:01.467 killing process with pid 1655900 00:27:01.467 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1655900 00:27:01.467 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1655900 00:27:01.726 00:27:01.726 real 0m17.162s 00:27:01.726 user 0m32.690s 00:27:01.726 sys 0m4.720s 
00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:01.726 ************************************ 00:27:01.726 END TEST nvmf_digest_clean 00:27:01.726 ************************************ 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:01.726 ************************************ 00:27:01.726 START TEST nvmf_digest_error 00:27:01.726 ************************************ 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1658778 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1658778 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1658778 ']' 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:01.726 11:34:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.726 [2024-07-26 11:34:57.325641] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:27:01.726 [2024-07-26 11:34:57.325678] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.726 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.984 [2024-07-26 11:34:57.395624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.984 [2024-07-26 11:34:57.472188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.984 [2024-07-26 11:34:57.472225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:01.984 [2024-07-26 11:34:57.472235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.984 [2024-07-26 11:34:57.472241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.984 [2024-07-26 11:34:57.472245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.984 [2024-07-26 11:34:57.472277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.550 [2024-07-26 11:34:58.154249] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.550 11:34:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.550 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.808 null0 00:27:02.808 [2024-07-26 11:34:58.242788] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.808 [2024-07-26 11:34:58.266964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1658907 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1658907 /var/tmp/bperf.sock 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1658907 ']' 
00:27:02.808 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.809 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.809 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.809 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.809 11:34:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.809 [2024-07-26 11:34:58.315248] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:27:02.809 [2024-07-26 11:34:58.315287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658907 ] 00:27:02.809 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.809 [2024-07-26 11:34:58.382239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.809 [2024-07-26 11:34:58.460620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.743 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.003 nvme0n1 00:27:04.003 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:04.003 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.003 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.003 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.003 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:04.003 11:34:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.295 Running I/O for 2 seconds... 00:27:04.295 [2024-07-26 11:34:59.699931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.295 [2024-07-26 11:34:59.699963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.295 [2024-07-26 11:34:59.699973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.295 [2024-07-26 11:34:59.711836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.295 [2024-07-26 11:34:59.711860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.295 [2024-07-26 11:34:59.711880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.295 [2024-07-26 11:34:59.724404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.295 [2024-07-26 11:34:59.724425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.295 [2024-07-26 11:34:59.724434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.295 [2024-07-26 11:34:59.732912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.295 [2024-07-26 11:34:59.732932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22775 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.295 [2024-07-26 11:34:59.732940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.295 [2024-07-26 11:34:59.744308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.295 [2024-07-26 11:34:59.744327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.295 [2024-07-26 11:34:59.744335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.295 [2024-07-26 11:34:59.755894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.295 [2024-07-26 11:34:59.755914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.295 [2024-07-26 11:34:59.755922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.295 [2024-07-26 11:34:59.763956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.295 [2024-07-26 11:34:59.763976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.295 [2024-07-26 11:34:59.763984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.775382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.775401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.775409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.787931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.787949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.787957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.800085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.800104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.800112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.810924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.810944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.810955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.823383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.823406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.823414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.832073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.832092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.832101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.844514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.844533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.844541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.852687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.852706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.852714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.863787] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.863807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.863814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.875789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.875810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.875819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.887814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.887835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.887842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.900055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.900079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.900087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.911386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.911410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.911418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.920559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.920581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.920590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.932089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.932110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.932118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.296 [2024-07-26 11:34:59.945102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.296 [2024-07-26 11:34:59.945123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.296 [2024-07-26 11:34:59.945131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.575 [2024-07-26 11:34:59.957128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.575 [2024-07-26 11:34:59.957149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.575 [2024-07-26 11:34:59.957157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.575 [2024-07-26 11:34:59.966632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.575 [2024-07-26 11:34:59.966652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.575 [2024-07-26 11:34:59.966660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.575 [2024-07-26 11:34:59.976421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.575 [2024-07-26 11:34:59.976442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.575 [2024-07-26 11:34:59.976449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.575 [2024-07-26 11:34:59.989072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:04.575 [2024-07-26 11:34:59.989092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.575 [2024-07-26 11:34:59.989100] 
[... the same two-line pattern — nvme_tcp.c:1459 "data digest error on tqpair=(0x24b14f0)" followed by a READ command on qid:1 (varying cid and lba) completed with TRANSIENT TRANSPORT ERROR (00/22) — repeats for every subsequent READ from 11:35:00.001 through 11:35:00.706 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.095 [2024-07-26 11:35:00.706382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.095 [2024-07-26 11:35:00.717969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.095 [2024-07-26 11:35:00.717989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.095 [2024-07-26 11:35:00.717998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.095 [2024-07-26 11:35:00.731555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.095 [2024-07-26 11:35:00.731574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.095 [2024-07-26 11:35:00.731582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.095 [2024-07-26 11:35:00.742306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.095 [2024-07-26 11:35:00.742326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.095 [2024-07-26 11:35:00.742334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.095 [2024-07-26 11:35:00.750762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 
00:27:05.095 [2024-07-26 11:35:00.750781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.095 [2024-07-26 11:35:00.750789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.760585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.760604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.760612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.768787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.768806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.768814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.780570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.780592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.780600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.791664] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.791683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.791691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.800402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.800424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.800432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.811504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.811523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.811531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.823262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.823281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.823289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.833465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.833484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.833492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.841604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.841623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.841636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.852704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.852723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.852730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.860457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.860475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.860483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.871778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.871798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.871805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.883376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.883395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.883403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.896111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.896131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.896139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.907195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.907213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 
11:35:00.907221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.917367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.917385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.917393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.354 [2024-07-26 11:35:00.925709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.354 [2024-07-26 11:35:00.925728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.354 [2024-07-26 11:35:00.925735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:00.937782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:00.937801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:00.937809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:00.948227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:00.948246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7008 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:00.948254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:00.956735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:00.956754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:00.956762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:00.967795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:00.967814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:00.967822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:00.977468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:00.977487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:00.977501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:00.986030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:00.986049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:00.986056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:00.995036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:00.995056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:00.995064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.355 [2024-07-26 11:35:01.004008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.355 [2024-07-26 11:35:01.004027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.355 [2024-07-26 11:35:01.004034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.013871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.013891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.013899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.024690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 
00:27:05.614 [2024-07-26 11:35:01.024709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.024717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.033313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.033332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.033340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.045925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.045952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.045960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.057764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.057784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.057792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.069069] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.069089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.069097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.078934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.078954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.078961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.086736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.086755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.086763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.096035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.096054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.096062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.107514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.107533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.107541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.115468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.115487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.115494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.127350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.127369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.127377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.136921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.136940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.136948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.144864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.144892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.144903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.154465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.154484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.154492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.165957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.165976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.165983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.178013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.178033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 
11:35:01.178040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.185985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.186004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.186013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.197350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.197369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.197377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.209555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.209574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.209582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.220786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.220805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4633 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-26 11:35:01.220813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-26 11:35:01.232186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.614 [2024-07-26 11:35:01.232205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-26 11:35:01.232214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-26 11:35:01.240580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.615 [2024-07-26 11:35:01.240603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-26 11:35:01.240611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-26 11:35:01.250962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.615 [2024-07-26 11:35:01.250982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-26 11:35:01.250990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-26 11:35:01.260156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.615 [2024-07-26 11:35:01.260175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-26 11:35:01.260183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-26 11:35:01.268412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.615 [2024-07-26 11:35:01.268431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-26 11:35:01.268438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-26 11:35:01.278278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.873 [2024-07-26 11:35:01.278298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-26 11:35:01.278305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-26 11:35:01.289005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.873 [2024-07-26 11:35:01.289025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-26 11:35:01.289033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-26 11:35:01.297431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24b14f0) 00:27:05.873 [2024-07-26 11:35:01.297453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-26 11:35:01.297461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-26 11:35:01.307754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.873 [2024-07-26 11:35:01.307775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-26 11:35:01.307782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-26 11:35:01.316702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.873 [2024-07-26 11:35:01.316723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.316731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.325999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.326019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.326027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.335257] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.335278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.335286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.343656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.343675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.343684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.355005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.355026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.355036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.362663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.362683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.362691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.372494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.372515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.372523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.382302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.382321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.382329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.390909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.390927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.390935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.400797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.400816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.400828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.408755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.408775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.408782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.420020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.420040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.420048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.430677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.430696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.430704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.438845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.438865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 
11:35:01.438873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.449809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.449828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.449835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.458914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.458936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.458943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.470338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.470358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.470365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.481307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.481326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19435 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.481334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.489990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.490010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.490018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.502764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.502784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.502792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.510502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.510522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.510530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.521636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.521655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.521662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.874 [2024-07-26 11:35:01.531970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:05.874 [2024-07-26 11:35:01.531990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.874 [2024-07-26 11:35:01.531997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.132 [2024-07-26 11:35:01.540449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.540469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.540477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.553157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.553176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.553184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.565287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.565308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.565315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.573385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.573405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.573416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.583466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.583486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.583494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.592631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.592650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.592658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.601809] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.601828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.601836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.611154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.611173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.611181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.620711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.620730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.620737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.629331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.629350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.629357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.639706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.639725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.639733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.647637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.647656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.647664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.656764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.656786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.656794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-26 11:35:01.666575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0) 00:27:06.133 [2024-07-26 11:35:01.666595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-26 11:35:01.666602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:06.133 [2024-07-26 11:35:01.676187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0)
00:27:06.133 [2024-07-26 11:35:01.676207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.133 [2024-07-26 11:35:01.676214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:06.133 [2024-07-26 11:35:01.685160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b14f0)
00:27:06.133 [2024-07-26 11:35:01.685180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.133 [2024-07-26 11:35:01.685188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:06.133
00:27:06.133 Latency(us)
00:27:06.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.133 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:06.133 nvme0n1 : 2.00 24928.46 97.38 0.00 0.00 5129.21 2527.82 18974.23
00:27:06.133 ===================================================================================================================
00:27:06.133 Total : 24928.46 97.38 0.00 0.00 5129.21 2527.82 18974.23
00:27:06.133 0
00:27:06.133 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:06.133 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:06.133 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:06.133 | .driver_specific
00:27:06.133 | .nvme_error
00:27:06.133 | .status_code
00:27:06.133 | .command_transient_transport_error'
00:27:06.133 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1658907
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1658907 ']'
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1658907
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658907
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658907'
00:27:06.391 killing process with pid 1658907
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1658907
00:27:06.391 Received shutdown signal, test time was about 2.000000 seconds
00:27:06.391
00:27:06.391 Latency(us)
00:27:06.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.391 ===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:06.391 11:35:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1658907
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1659602
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1659602 /var/tmp/bperf.sock
00:27:06.649 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:06.650 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1659602 ']'
00:27:06.650 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:06.650 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:06.650 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:06.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:06.650 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:06.650 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:06.650 [2024-07-26 11:35:02.162126] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:27:06.650 [2024-07-26 11:35:02.162170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659602 ]
00:27:06.650 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:06.650 Zero copy mechanism will not be used.
00:27:06.650 EAL: No free 2048 kB hugepages reported on node 1
00:27:06.650 [2024-07-26 11:35:02.227920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:06.650 [2024-07-26 11:35:02.295033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:07.583 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:07.583 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:07.583 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.583 11:35:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.583 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:07.583 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.583 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.583 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.583 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.583 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:08.166 nvme0n1
00:27:08.166 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:08.166 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.166 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:08.166 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.166 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:08.166 11:35:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:08.166 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:08.166 Zero copy mechanism will not be used.
00:27:08.166 Running I/O for 2 seconds...
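The get_transient_errcount check earlier in this log pipes `bdev_get_iostat` output through a jq filter (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`). A minimal Python sketch of that same extraction, assuming only the JSON shape implied by the jq filter; the sample document and the count 195 are illustrative, taken from the `(( 195 > 0 ))` check in this run:

```python
import json

# Hypothetical iostat payload in the shape the jq filter above expects.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {"command_transient_transport_error": 195}
        }
      }
    }
  ]
}
""")

def transient_errcount(iostat: dict) -> int:
    """Mirror of the jq pipeline: walk bdevs[0] down to the
    command_transient_transport_error counter."""
    return iostat["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"
    ]["command_transient_transport_error"]

print(transient_errcount(sample))  # 195
```

The test script only asserts the counter is greater than zero, since the exact number of injected digest errors observed in a 2-second run varies.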
00:27:08.166 [2024-07-26 11:35:03.676736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.676769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.676779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.682889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.682917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.682926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.689031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.689053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.689062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.695274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.695296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.695304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.701222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.701242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.701255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.707239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.707259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.707266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.713104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.713124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.713132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.719205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.719225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.719232] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.724979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.724998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.725006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.731684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.731703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.731711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.737608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.737633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.166 [2024-07-26 11:35:03.737641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.166 [2024-07-26 11:35:03.742982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.166 [2024-07-26 11:35:03.743002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:27:08.166 [2024-07-26 11:35:03.743010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.166 [2024-07-26 11:35:03.748882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.166 [2024-07-26 11:35:03.748901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.166 [2024-07-26 11:35:03.748909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.166 [2024-07-26 11:35:03.755228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.166 [2024-07-26 11:35:03.755251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.166 [2024-07-26 11:35:03.755259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.166 [2024-07-26 11:35:03.761129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.166 [2024-07-26 11:35:03.761148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.166 [2024-07-26 11:35:03.761156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.767752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.767771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.767778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.773456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.773476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.773484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.778549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.778569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.778576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.783936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.783957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.783965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.789747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.789767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.789775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.795650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.795671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.795678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.801888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.801909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.801916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.808339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.808360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.808367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.814437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.814458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.814465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.167 [2024-07-26 11:35:03.820431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.167 [2024-07-26 11:35:03.820452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.167 [2024-07-26 11:35:03.820460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.425 [2024-07-26 11:35:03.826346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.826367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.826374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.832512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.832533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.832541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.838658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.838678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.838685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.844553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.844573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.844580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.850753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.850774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.850782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.856588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.856609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.856620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.861936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.861957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.861964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.867752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.867773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.867781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.873793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.873814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.873822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.879357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.879377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.879385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.884950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.884970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.884978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.890571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.890591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.890599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.896136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.896156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.896164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.900263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.900282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.900289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.906829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.906852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.906860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.912675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.912694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.912702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.918581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.918603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.918610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.924322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.924342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.924350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.929862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.929883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.929891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.936138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.936160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.936168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.942311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.942332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.942339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.948335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.948356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.948364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.954500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.954520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.954527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.960406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.960426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.960434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.966289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.966308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.966316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.972128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.972148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.972158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.977818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.977838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.977846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.985069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.985090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.985098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:03.992317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:03.992337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:03.992345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.000470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.000492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.000500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.008334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.008355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.008364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.016382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.016403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.016415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.024609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.024638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.024647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.031975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.031996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.032004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.038270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.038291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.038299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.044500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.044521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.044528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.426 [2024-07-26 11:35:04.050265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.426 [2024-07-26 11:35:04.050285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.426 [2024-07-26 11:35:04.050293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.427 [2024-07-26 11:35:04.056156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.427 [2024-07-26 11:35:04.056176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.427 [2024-07-26 11:35:04.056184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.427 [2024-07-26 11:35:04.061822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.427 [2024-07-26 11:35:04.061842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.427 [2024-07-26 11:35:04.061850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.427 [2024-07-26 11:35:04.067677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.427 [2024-07-26 11:35:04.067697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.427 [2024-07-26 11:35:04.067704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.427 [2024-07-26 11:35:04.073467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.427 [2024-07-26 11:35:04.073487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.427 [2024-07-26 11:35:04.073494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.427 [2024-07-26 11:35:04.079116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.427 [2024-07-26 11:35:04.079136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.427 [2024-07-26 11:35:04.079143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.085049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.085070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.085077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.091658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.091677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.091685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.097671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.097691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.097698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.103678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.103705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.103712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.109810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.109831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.109838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.116283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.116303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.116310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.122421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.122441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.122452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.128261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.128281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.128288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.686 [2024-07-26 11:35:04.134173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.686 [2024-07-26 11:35:04.134193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.686 [2024-07-26 11:35:04.134201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.137990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.138009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.138017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.142255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.142275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.142282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.148329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.148351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.148359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.153374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.153395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.153403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.159642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.159661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.159669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.165671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.165691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.165698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.171603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.171631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.171639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.177427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.177448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.177455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.183037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:08.687 [2024-07-26 11:35:04.183058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.687 [2024-07-26 11:35:04.183065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.687 [2024-07-26 11:35:04.188588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data
digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.188610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.188618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.194099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.194118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.194126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.199662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.199682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.199690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.205159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.205180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.205187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.210759] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.210780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.210787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.216425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.216446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.216454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.221995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.222018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.222026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.227653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.227674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.227681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.233329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.233351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.233359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.238905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.238927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.238934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.244312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.244332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.244339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.249888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.249909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.249917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.255036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.255058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.255066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.260377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.260398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.260405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.265644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.265664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.265676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.271165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.271186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 
11:35:04.271193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.276579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.276600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.687 [2024-07-26 11:35:04.276607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.687 [2024-07-26 11:35:04.282138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.687 [2024-07-26 11:35:04.282159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.282167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.286841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.286861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.286869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.289978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.289998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.290005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.295243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.295263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.295271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.300386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.300406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.300414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.305596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.305616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.305623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.310937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.310961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.310969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.316260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.316279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.316288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.321581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.321601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.321609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.326845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.326866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.326873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.331449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.331470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.331477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.335960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.335980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.335988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.688 [2024-07-26 11:35:04.341030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.688 [2024-07-26 11:35:04.341051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.688 [2024-07-26 11:35:04.341059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.345719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.345740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.345749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.350738] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.350758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.350766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.355713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.355733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.355741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.360671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.360691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.360699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.365587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.365607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.365614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.370602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.370623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.370637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.375675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.375695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.375702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.380723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.380743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.380751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.385869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.385889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.385898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.390896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.390917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.390924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.395942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.395966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.395973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.400967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.400988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.400995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.406054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.406075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 
11:35:04.406082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.411091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.411110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.411118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.416158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.416178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.416187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.421363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.421384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.421392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.426616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.426644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.426652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.431825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.431845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.431852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.437060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.948 [2024-07-26 11:35:04.437081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.948 [2024-07-26 11:35:04.437088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.948 [2024-07-26 11:35:04.442436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.442457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.442465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.447854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.447873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.447881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.453390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.453410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.453417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.459039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.459059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.459067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.464543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.464564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.464572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.469951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.469972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.469979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.475298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.475318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.475326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.480815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.480835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.480843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.486239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.486260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.486271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.491664] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.491684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.491692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.497052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.497073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.497081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.502370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.502389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.502397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.507586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.507609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.507618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.512796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.512818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.512825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.518131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.518153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.518160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.523399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.523420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.523429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.528681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.528701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.528709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.533983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.534009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.534016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.539300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.539320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.539327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.544754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.544774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.544782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.550152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.550173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 
11:35:04.550182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.555453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.555474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.555481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.560796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.560816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.560824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.566093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.566113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.566121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.571257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.571278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.571286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.576308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.576329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.949 [2024-07-26 11:35:04.576337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.949 [2024-07-26 11:35:04.581319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.949 [2024-07-26 11:35:04.581339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.950 [2024-07-26 11:35:04.581347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.950 [2024-07-26 11:35:04.586400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.950 [2024-07-26 11:35:04.586420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.950 [2024-07-26 11:35:04.586427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.950 [2024-07-26 11:35:04.591533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.950 [2024-07-26 11:35:04.591553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.950 [2024-07-26 11:35:04.591560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.950 [2024-07-26 11:35:04.596838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.950 [2024-07-26 11:35:04.596859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.950 [2024-07-26 11:35:04.596866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.950 [2024-07-26 11:35:04.602111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:08.950 [2024-07-26 11:35:04.602132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.950 [2024-07-26 11:35:04.602140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.607441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.607462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.607470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.612791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.612811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.612818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.618189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.618210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.618217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.623555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.623575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.623585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.628980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.628999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.629006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.634343] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.634363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.634370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.639620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.639646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.639654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.644841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.644861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.644868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.650079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.650099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.650106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.655445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.655464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.655472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.660799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.660820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.660828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.666164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.666183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.666190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.671585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.671606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.671613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.209 [2024-07-26 11:35:04.676936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.209 [2024-07-26 11:35:04.676956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.209 [2024-07-26 11:35:04.676964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.682221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.682243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.682252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.687468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.687487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.687495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.692729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.692748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 
11:35:04.692756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.698176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.698195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.698203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.703602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.703623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.703635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.708984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.709004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.709011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.714399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.714419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.714429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.719857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.719878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.719886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.725196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.725216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.725224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.730467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.730487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.730495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.735714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.735734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.735741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.740900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.740921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.740928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.746210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.746230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.746238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.751572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.751592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.751599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.756947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.756967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.756976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.762317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.762339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.762347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.767675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.767694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.767701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.773053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.773073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.773081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.778241] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.778261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.778268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.783439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.783459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.783466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.788687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.210 [2024-07-26 11:35:04.788708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.210 [2024-07-26 11:35:04.788715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.210 [2024-07-26 11:35:04.794007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.211 [2024-07-26 11:35:04.794027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.211 [2024-07-26 11:35:04.794035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:27:09.211 [2024-07-26 11:35:04.799404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.211 [2024-07-26 11:35:04.799424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.211 [2024-07-26 11:35:04.799431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.211 [2024-07-26 11:35:04.804815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.211 [2024-07-26 11:35:04.804835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.211 [2024-07-26 11:35:04.804842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.211 [2024-07-26 11:35:04.810334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.211 [2024-07-26 11:35:04.810354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.211 [2024-07-26 11:35:04.810361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.211 [2024-07-26 11:35:04.815775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.211 [2024-07-26 11:35:04.815794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.211 [2024-07-26 11:35:04.815801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.211 [2024-07-26 11:35:04.821131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.211 [2024-07-26 11:35:04.821151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.211 [2024-07-26 11:35:04.821158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.211 [2024-07-26 11:35:04.826418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.211 [2024-07-26 11:35:04.826437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.211 [2024-07-26 11:35:04.826445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplet repeated for every READ on qid:1 through 2024-07-26 11:35:05.245582, varying only cid, lba, and sqhd ...]
00:27:09.732 [2024-07-26 11:35:05.245556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.732 [2024-07-26 11:35:05.245575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.732 [2024-07-26 11:35:05.245582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.732 [2024-07-26 11:35:05.250781] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.732 [2024-07-26 11:35:05.250801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.732 [2024-07-26 11:35:05.250808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.732 [2024-07-26 11:35:05.256008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.732 [2024-07-26 11:35:05.256027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.732 [2024-07-26 11:35:05.256034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.732 [2024-07-26 11:35:05.261267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.732 [2024-07-26 11:35:05.261287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.732 [2024-07-26 11:35:05.261294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.732 [2024-07-26 11:35:05.266674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.732 [2024-07-26 11:35:05.266693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.732 [2024-07-26 11:35:05.266701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:27:09.732 [2024-07-26 11:35:05.272055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.732 [2024-07-26 11:35:05.272075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.732 [2024-07-26 11:35:05.272082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.732 [2024-07-26 11:35:05.277406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.732 [2024-07-26 11:35:05.277425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.732 [2024-07-26 11:35:05.277433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.732 [2024-07-26 11:35:05.282739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.732 [2024-07-26 11:35:05.282759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.732 [2024-07-26 11:35:05.282766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.288141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.288160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.288168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.293423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.293443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.293450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.298577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.298597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.298604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.303756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.303776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.303784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.308944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.308963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.308973] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.314194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.314214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.314222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.319462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.319483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.319490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.324735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.324754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.324761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.330126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.330146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.330154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.335444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.335464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.335471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.340781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.340801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.340808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.346131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.346151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.346159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.351519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.351539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.351547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.356725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.356748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.356755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.361892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.361912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.361919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.367061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.367081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.367088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.372217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.372236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.372244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.377427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.377446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.377454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.382586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.382606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.382614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.733 [2024-07-26 11:35:05.387871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.733 [2024-07-26 11:35:05.387891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.733 [2024-07-26 11:35:05.387899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.992 [2024-07-26 11:35:05.393098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.393118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.393125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.398433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.398452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.398459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.403775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.403795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.403802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.409159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.409179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.409187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.414525] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.414544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.414551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.419940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.419959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.419966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.425201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.425221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.425228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.430430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.430450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.430457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.435583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.435603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.435610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.440789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.440809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.440816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.446143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.446163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.446174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.451529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.451549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.451556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.456932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.456952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.456959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.462366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.462385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.462392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.467802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.467822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.467830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.473145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.473165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 
11:35:05.473172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.478506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.478525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.478533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.483858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.483878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.483886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.489086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.489106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.489113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.494401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.494421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.494429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.499905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.499925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.499933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.505617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.505642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.505649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.511537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.511557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.993 [2024-07-26 11:35:05.511564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.993 [2024-07-26 11:35:05.517350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030) 00:27:09.993 [2024-07-26 11:35:05.517370] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.993 [2024-07-26 11:35:05.517377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.993 [2024-07-26 11:35:05.523367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.993 [2024-07-26 11:35:05.523388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.993 [2024-07-26 11:35:05.523396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.993 [2024-07-26 11:35:05.529234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.993 [2024-07-26 11:35:05.529256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.993 [2024-07-26 11:35:05.529264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.993 [2024-07-26 11:35:05.535005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.993 [2024-07-26 11:35:05.535026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.535034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.540613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.540639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.540650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.546468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.546488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.546495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.552367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.552387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.552394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.558316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.558337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.558344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.564345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.564365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.564372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.569837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.569857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.569865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.575268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.575288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.575296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.580535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.580554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.580563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.585872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.585893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.585901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.591127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.591151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.591159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.596363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.596383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.596390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.601742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.601761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.601768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.607183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.607203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.607211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.613194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.613215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.613223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.619073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.619093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.619101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.624810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.624832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.624839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.630598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.630617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.630631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.636266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.636286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.636295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.641784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.641804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.641811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.994 [2024-07-26 11:35:05.647654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:09.994 [2024-07-26 11:35:05.647676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.994 [2024-07-26 11:35:05.647684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:10.252 [2024-07-26 11:35:05.653446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:10.252 [2024-07-26 11:35:05.653468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.252 [2024-07-26 11:35:05.653476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:10.252 [2024-07-26 11:35:05.659122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:10.252 [2024-07-26 11:35:05.659143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.252 [2024-07-26 11:35:05.659150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:10.252 [2024-07-26 11:35:05.664634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:10.252 [2024-07-26 11:35:05.664656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.252 [2024-07-26 11:35:05.664663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:10.252 [2024-07-26 11:35:05.670331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b030)
00:27:10.252 [2024-07-26 11:35:05.670352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.252 [2024-07-26 11:35:05.670359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:10.252
00:27:10.252 Latency(us)
00:27:10.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.252 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:10.252 nvme0n1 : 2.00 5639.41 704.93 0.00 0.00 2833.99 620.25 9736.78
00:27:10.252 ===================================================================================================================
00:27:10.252 Total : 5639.41 704.93 0.00 0.00 2833.99 620.25 9736.78
00:27:10.253 0
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:10.253 | .driver_specific
00:27:10.253 | .nvme_error
00:27:10.253 | .status_code
00:27:10.253 | .command_transient_transport_error'
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 364 > 0 ))
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1659602
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1659602 ']'
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1659602
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:10.253 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1659602
00:27:10.510 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:10.510 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:10.510 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1659602'
killing process with pid 1659602
00:27:10.510 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1659602
Received shutdown signal, test time was about 2.000000 seconds
00:27:10.510
00:27:10.510 Latency(us)
00:27:10.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.510 ===================================================================================================================
00:27:10.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:10.510 11:35:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1659602
00:27:10.510 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:10.510 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:10.510 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1660301
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1660301 /var/tmp/bperf.sock
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1660301 ']'
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:10.511 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:10.511 [2024-07-26 11:35:06.146117] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:27:10.511 [2024-07-26 11:35:06.146164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660301 ]
00:27:10.511 EAL: No free 2048 kB hugepages reported on node 1
00:27:10.768 [2024-07-26 11:35:06.213666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.768 [2024-07-26 11:35:06.284590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:11.334 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:11.334 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:11.334 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.334 11:35:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.592 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:11.592 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:11.592 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:11.592 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:11.592 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:11.592 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:11.850 nvme0n1
00:27:11.850 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:11.850 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:11.850 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:11.850 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:11.850 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:11.850 11:35:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:11.850 Running I/O for 2 seconds...
00:27:11.850 [2024-07-26 11:35:07.497886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:11.850 [2024-07-26 11:35:07.498078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.850 [2024-07-26 11:35:07.498106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:11.850 [2024-07-26 11:35:07.507512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:11.851 [2024-07-26 11:35:07.507686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.851 [2024-07-26 11:35:07.507709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.517149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.517311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.517330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.526370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.526528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.526546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.535607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.535776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.535793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.544823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.544981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.544998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.554055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.554230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.554246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.563304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.563463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.563480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.572572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.572735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.572752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.581776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.581932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.581949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.590984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.591167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.591184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.600227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.600383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.600404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.609426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.609581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.609597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.618619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.618782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.618799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.627802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.627976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.627993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.637103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.637258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.637274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.646288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.646442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.646459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.655461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.655617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.655637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.664686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.664848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.664865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.673889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.674046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.674062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.683079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.683241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.683257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.692316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.692471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.692487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.701527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.701705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.109 [2024-07-26 11:35:07.701729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.109 [2024-07-26 11:35:07.710762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.109 [2024-07-26 11:35:07.710919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.110 [2024-07-26 11:35:07.710936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.110 [2024-07-26 11:35:07.719940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.110 [2024-07-26 11:35:07.720092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.110 [2024-07-26 11:35:07.720108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.110 [2024-07-26 11:35:07.729116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.110 [2024-07-26 11:35:07.729270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.110 [2024-07-26 11:35:07.729287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.110 [2024-07-26 11:35:07.738294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.110 [2024-07-26 11:35:07.738451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.110 [2024-07-26 11:35:07.738468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.110 [2024-07-26 11:35:07.747500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.110 [2024-07-26 11:35:07.747656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.110 [2024-07-26 11:35:07.747673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.110 [2024-07-26 11:35:07.756689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.110 [2024-07-26 11:35:07.756862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.110 [2024-07-26 11:35:07.756879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.110 [2024-07-26 11:35:07.766206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.110 [2024-07-26 11:35:07.766368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.110 [2024-07-26 11:35:07.766386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.368 [2024-07-26 11:35:07.775784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.368 [2024-07-26 11:35:07.775947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.368 [2024-07-26 11:35:07.775963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.368 [2024-07-26 11:35:07.785033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.368 [2024-07-26 11:35:07.785192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.368 [2024-07-26 11:35:07.785210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.368 [2024-07-26 11:35:07.794270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.368 [2024-07-26 11:35:07.794433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.368 [2024-07-26 11:35:07.794451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.368 [2024-07-26 11:35:07.803469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.368 [2024-07-26 11:35:07.803629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.368 [2024-07-26 11:35:07.803646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.368 [2024-07-26 11:35:07.812649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:12.368 [2024-07-26 11:35:07.812825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.368 [2024-07-26 11:35:07.812841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.368 [2024-07-26 11:35:07.821878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420)
with pdu=0x2000190fd640 00:27:12.368 [2024-07-26 11:35:07.822051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.368 [2024-07-26 11:35:07.822068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.368 [2024-07-26 11:35:07.831089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.368 [2024-07-26 11:35:07.831245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.368 [2024-07-26 11:35:07.831262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.368 [2024-07-26 11:35:07.840311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.368 [2024-07-26 11:35:07.840467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.368 [2024-07-26 11:35:07.840483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.368 [2024-07-26 11:35:07.849556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.368 [2024-07-26 11:35:07.849718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.368 [2024-07-26 11:35:07.849735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.368 [2024-07-26 11:35:07.858735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.368 [2024-07-26 11:35:07.858910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.368 [2024-07-26 11:35:07.858927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.368 [2024-07-26 11:35:07.867954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.368 [2024-07-26 11:35:07.868109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.368 [2024-07-26 11:35:07.868125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.368 [2024-07-26 11:35:07.877126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.368 [2024-07-26 11:35:07.877281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.368 [2024-07-26 11:35:07.877298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.368 [2024-07-26 11:35:07.886337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.886491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.886507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.895643] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.895800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.895818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.904823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.904981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.904997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.914002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.914159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.914175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.923193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.923350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.923369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:12.369 [2024-07-26 11:35:07.932376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.932531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.932548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.941607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.941789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.941806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.950816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.950998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.951015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.960032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.960186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.960202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.969199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.969354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.969370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.978379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.978536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.978552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.987551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.987712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.987729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:07.996741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:07.996899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:07.996916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:08.005919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:08.006078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:08.006094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:08.015113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:08.015280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:08.015297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.369 [2024-07-26 11:35:08.024589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.369 [2024-07-26 11:35:08.024756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.369 [2024-07-26 11:35:08.024773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.034215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.034394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.034410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.043453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.043610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.043630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.052635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.052792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.052809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.061809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.061965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.061981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.070990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.071145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 
[2024-07-26 11:35:08.071161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.080200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.080372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.080389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.089407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.089582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.089599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.098658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.098815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.098831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.107825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.107980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13333 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.107997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.117004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.627 [2024-07-26 11:35:08.117158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.627 [2024-07-26 11:35:08.117175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.627 [2024-07-26 11:35:08.126223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.126380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.126396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.135409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.135562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.135578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.144630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.144785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.144803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.153780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.153957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.153974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.163094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.163249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.163268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.172284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.172441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.172458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.181440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.181594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.181611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.190592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.190755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.190771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.199839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.199993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.200010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.209241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.209396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.209413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.218430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 
[2024-07-26 11:35:08.218584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.218600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.227592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.227754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.227771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.236809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.236966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.236982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.245978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.246136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.246153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.255158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) 
with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.255333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.255349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.264368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.264523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.264540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.273668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.273837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.273854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.628 [2024-07-26 11:35:08.283128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.628 [2024-07-26 11:35:08.283293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.628 [2024-07-26 11:35:08.283309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.886 [2024-07-26 11:35:08.292697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.886 [2024-07-26 11:35:08.292877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.886 [2024-07-26 11:35:08.292893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.886 [2024-07-26 11:35:08.301951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.886 [2024-07-26 11:35:08.302106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.886 [2024-07-26 11:35:08.302122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.886 [2024-07-26 11:35:08.311143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.886 [2024-07-26 11:35:08.311298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.311314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.320607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.320770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.320787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.329897] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.330073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.330090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.339214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.339388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.339405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.348649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.348808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.348824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.357833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.357988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.358005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:12.887 [2024-07-26 11:35:08.367038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.367190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.367207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.376219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.376374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.376391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.385389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.385545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.385562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.394559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.394722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.394738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.403746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.403905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.403921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.412929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.413087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.413103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.422099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.422256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.422273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.431291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.431446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.431463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.440464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.440618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.440638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.449731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.449886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.449902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.458896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.459065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.459082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.468153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.468308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.468324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.477340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.477496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.477512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.486540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.486708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.486728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.495749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.495908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.495924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.504935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.505093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 
[2024-07-26 11:35:08.505110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.514137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.514293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.514310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.523330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.523485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.523501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.532515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.532693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.532710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.887 [2024-07-26 11:35:08.542071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:12.887 [2024-07-26 11:35:08.542280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7509 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:12.887 [2024-07-26 11:35:08.542297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.551830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.551993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.552011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.561246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.561402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.561418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.570421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.570584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.570601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.579641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.579798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.579817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.588832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.588989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.589006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.598066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.598223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.598239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.607228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.607384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.607401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.158 [2024-07-26 11:35:08.616446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.158 [2024-07-26 11:35:08.616600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.158 [2024-07-26 11:35:08.616617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.625588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.625751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.625767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.634804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.634989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.635005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.644014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.644170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.644187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.653234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 
[2024-07-26 11:35:08.653391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.653408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.662471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.662634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.662650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.671705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.671881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.671898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.681183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.681339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.681356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.690387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) 
with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.690560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.690577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.699623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.159 [2024-07-26 11:35:08.699783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.159 [2024-07-26 11:35:08.699800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.159 [2024-07-26 11:35:08.708810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.160 [2024-07-26 11:35:08.708965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.160 [2024-07-26 11:35:08.708982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.160 [2024-07-26 11:35:08.718098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.160 [2024-07-26 11:35:08.718256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.160 [2024-07-26 11:35:08.718272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.160 [2024-07-26 11:35:08.727436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.160 [2024-07-26 11:35:08.727592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.160 [2024-07-26 11:35:08.727612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.160 [2024-07-26 11:35:08.736690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.160 [2024-07-26 11:35:08.736845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.160 [2024-07-26 11:35:08.736862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.160 [2024-07-26 11:35:08.745921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.160 [2024-07-26 11:35:08.746078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.746095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.161 [2024-07-26 11:35:08.755109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.161 [2024-07-26 11:35:08.755266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.755283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.161 [2024-07-26 11:35:08.764362] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.161 [2024-07-26 11:35:08.764519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.764535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.161 [2024-07-26 11:35:08.773568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.161 [2024-07-26 11:35:08.773754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.773772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.161 [2024-07-26 11:35:08.782836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.161 [2024-07-26 11:35:08.783005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.783022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.161 [2024-07-26 11:35:08.792290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.161 [2024-07-26 11:35:08.792466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.792482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:13.161 [2024-07-26 11:35:08.801570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.161 [2024-07-26 11:35:08.801751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.801769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.161 [2024-07-26 11:35:08.811105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.161 [2024-07-26 11:35:08.811265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.161 [2024-07-26 11:35:08.811284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.820489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.820646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.820680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.829863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.830020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.830037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.839056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.839211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.839228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.848249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.848406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.848422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.857474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.857636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.857655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.866693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.866849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.866867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.875882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.876037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.876056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.885095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.885250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.885268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.894283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.894461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.894478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.903513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.903676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.903693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.912703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.912859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.912877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.921900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.922053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.922070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.423 [2024-07-26 11:35:08.931084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.423 [2024-07-26 11:35:08.931238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.423 [2024-07-26 11:35:08.931254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:08.940265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:08.940419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 
[2024-07-26 11:35:08.940435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:08.949435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:08.949589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:08.949606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:08.958689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:08.958844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:08.958861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:08.967881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:08.968034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:08.968051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:08.977068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:08.977221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17459 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:08.977237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:08.986236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:08.986389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:08.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:08.995414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:08.995567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:08.995584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.004565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.004727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.004743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.013764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.013919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.013935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.022941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.023096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.023113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.032126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.032281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.032298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.041291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.041460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.041476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.050729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.050904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.050926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.060126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.060284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.060301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.069420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.069573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.069589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.424 [2024-07-26 11:35:09.078716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.424 [2024-07-26 11:35:09.078878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.424 [2024-07-26 11:35:09.078895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.682 [2024-07-26 11:35:09.088343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.682 
[2024-07-26 11:35:09.088504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.682 [2024-07-26 11:35:09.088521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.682 [2024-07-26 11:35:09.097619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.682 [2024-07-26 11:35:09.097784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.682 [2024-07-26 11:35:09.097800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.682 [2024-07-26 11:35:09.106827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.106983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.106999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.116008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.116162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.116179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.125186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) 
with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.125340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.125357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.134382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.134557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.134574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.143624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.143788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.143805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.152853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.153009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.153026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.162041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.162197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.162230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.171289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.171446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.171463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.180476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.180636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.180653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.189704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.189884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.189900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.199011] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.199183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.199200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.208251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.208406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.208423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.217430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.217586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.217602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.226591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.226753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.226769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:13.683 [2024-07-26 11:35:09.235967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.236121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.236138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.245146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.245320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.245337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.254366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.254521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.254538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.263543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.263704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.263720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.272737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.683 [2024-07-26 11:35:09.272892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.683 [2024-07-26 11:35:09.272908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.683 [2024-07-26 11:35:09.281909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.684 [2024-07-26 11:35:09.282065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.684 [2024-07-26 11:35:09.282081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.684 [2024-07-26 11:35:09.291081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.684 [2024-07-26 11:35:09.291235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.684 [2024-07-26 11:35:09.291252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.684 [2024-07-26 11:35:09.300254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.684 [2024-07-26 11:35:09.300421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.684 [2024-07-26 11:35:09.300437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.684 [2024-07-26 11:35:09.309683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.684 [2024-07-26 11:35:09.309843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.684 [2024-07-26 11:35:09.309860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.684 [2024-07-26 11:35:09.319068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.684 [2024-07-26 11:35:09.319240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.684 [2024-07-26 11:35:09.319257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.684 [2024-07-26 11:35:09.328318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.684 [2024-07-26 11:35:09.328491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.684 [2024-07-26 11:35:09.328507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.684 [2024-07-26 11:35:09.337584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.684 [2024-07-26 11:35:09.337750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.684 [2024-07-26 11:35:09.337768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.942 [2024-07-26 11:35:09.347279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.942 [2024-07-26 11:35:09.347436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.942 [2024-07-26 11:35:09.347452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.942 [2024-07-26 11:35:09.356570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.942 [2024-07-26 11:35:09.356750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.942 [2024-07-26 11:35:09.356767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.942 [2024-07-26 11:35:09.365844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.942 [2024-07-26 11:35:09.366018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.942 [2024-07-26 11:35:09.366035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.942 [2024-07-26 11:35:09.375084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.375238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 
[2024-07-26 11:35:09.375257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.384249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.384403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.384419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.393426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.393581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.393597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.402602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.402765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.402781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.411794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.411951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14679 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.411967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.420943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.421101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.421118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.430145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.430298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.430314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.439307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.439462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.439478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.448492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.448648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.448665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.457712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.457871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.457888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.466862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.467036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.467053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.476071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.476225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.943 [2024-07-26 11:35:09.476241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.943 [2024-07-26 11:35:09.485248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640 00:27:13.943 [2024-07-26 11:35:09.485402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.943 [2024-07-26 11:35:09.485419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:13.943 [2024-07-26 11:35:09.494394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1285420) with pdu=0x2000190fd640
00:27:13.943 [2024-07-26 11:35:09.494547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.943 [2024-07-26 11:35:09.494564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:13.943
00:27:13.943 Latency(us)
00:27:13.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:13.943 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:13.943 nvme0n1 : 2.00 27631.25 107.93 0.00 0.00 4624.23 2028.50 9924.02
00:27:13.943 ===================================================================================================================
00:27:13.943 Total : 27631.25 107.93 0.00 0.00 4624.23 2028.50 9924.02
00:27:13.943 0
00:27:13.943 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:13.943 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:13.943 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:13.943 | .driver_specific
00:27:13.943 | .nvme_error
00:27:13.943 | .status_code
00:27:13.943 | .command_transient_transport_error'
00:27:13.943 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1660301
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1660301 ']'
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1660301
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1660301
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1660301'
00:27:14.201 killing process with pid 1660301
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1660301
00:27:14.201 Received shutdown signal, test time was about 2.000000 seconds
00:27:14.201
00:27:14.201 Latency(us)
00:27:14.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:14.201 ===================================================================================================================
00:27:14.201 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:14.201 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1660301
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1660790
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1660790 /var/tmp/bperf.sock
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1660790 ']'
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:14.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:14.459 11:35:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.459 [2024-07-26 11:35:09.967874] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:27:14.459 [2024-07-26 11:35:09.967928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660790 ]
00:27:14.459 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:14.459 Zero copy mechanism will not be used.
00:27:14.459 EAL: No free 2048 kB hugepages reported on node 1
00:27:14.459 [2024-07-26 11:35:10.034641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:14.459 [2024-07-26 11:35:10.120948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:15.279 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:15.279 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:15.279 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:15.279 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:15.536 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:15.536 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.536 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.536 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.536 11:35:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.536 nvme0n1 00:27:15.795 11:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:15.795 11:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.795 11:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.795 11:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.795 11:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:15.795 11:35:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:15.795 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:15.795 Zero copy mechanism will not be used. 00:27:15.795 Running I/O for 2 seconds... 
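The ERROR records that follow are the injected failures working as intended: accel_error_inject_error -o crc32c -t corrupt makes the host emit bad data digests, and the target's tcp.c logs "Data digest error" for each write PDU. The NVMe/TCP data digest is CRC-32C (Castagnoli); below is a bitwise pure-Python reference sketch of that checksum, for illustration only — SPDK computes it in accelerated C code:

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli), the checksum used for NVMe/TCP header and
    data digests. Bitwise reference implementation: reflected polynomial
    0x82F63B78, initial value and final XOR of 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF
```

Corrupting even a single payload byte changes the digest, which is exactly the mismatch the data_crc32_calc_done callback reports on every PDU in the log below.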
00:27:15.795 [2024-07-26 11:35:11.308985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.795 [2024-07-26 11:35:11.309341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.795 [2024-07-26 11:35:11.309368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.795 [2024-07-26 11:35:11.315987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.795 [2024-07-26 11:35:11.316357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.795 [2024-07-26 11:35:11.316379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.795 [2024-07-26 11:35:11.321887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.795 [2024-07-26 11:35:11.321946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.795 [2024-07-26 11:35:11.321965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.795 [2024-07-26 11:35:11.328386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.795 [2024-07-26 11:35:11.328759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.795 [2024-07-26 11:35:11.328782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.795 [2024-07-26 11:35:11.334700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.795 [2024-07-26 11:35:11.335070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.795 [2024-07-26 11:35:11.335088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.795 [2024-07-26 11:35:11.340660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.795 [2024-07-26 11:35:11.340715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.795 [2024-07-26 11:35:11.340732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.346945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.347299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.347317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.352608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.352674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.352692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.358817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.359206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.359225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.364867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.365225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.365243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.371195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.371566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.371584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.377152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.377518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:15.796 [2024-07-26 11:35:11.377538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.382212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.382575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.382593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.386684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.387037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.387055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.391125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.391490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.391509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.395649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.396009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.396027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.400133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.400498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.400517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.404681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.405032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.405051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.409237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.409597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.409616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.413772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.414132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.414150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.418316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.418682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.418700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.422790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.423148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.423166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.427303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.427658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.427677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.431757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:15.796 [2024-07-26 11:35:11.432104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.432123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.436226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.436601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.436620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.440684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.441041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.441060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.445138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.445488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.445506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.796 [2024-07-26 11:35:11.449721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:15.796 [2024-07-26 11:35:11.450084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.796 [2024-07-26 11:35:11.450103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.055 [2024-07-26 11:35:11.454428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.055 [2024-07-26 11:35:11.454808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.055 [2024-07-26 11:35:11.454827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.055 [2024-07-26 11:35:11.459031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.055 [2024-07-26 11:35:11.459393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.055 [2024-07-26 11:35:11.459418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.055 [2024-07-26 11:35:11.463700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.055 [2024-07-26 11:35:11.464068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.055 [2024-07-26 11:35:11.464086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.055 [2024-07-26 11:35:11.469174] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.055 [2024-07-26 11:35:11.469545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.055 [2024-07-26 11:35:11.469564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.055 [2024-07-26 11:35:11.474522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.474884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.474902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.480600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.480956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.480975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.486739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.487093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.487112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.492289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.492664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.497713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.498050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.498068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.503939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.504301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.504319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.510068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.510424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.510442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.516964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.517296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.517314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.524728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.524827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.524843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.531979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.532420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.532438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.539299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.539730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.539749] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.547023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.547462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.547481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.554299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.554732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.554751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.562286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.562733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.562753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.569813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.570202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.570220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.576976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.577352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.577371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.583693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.584057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.584076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.590707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.591126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.591145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.597708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.598113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.598132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.605030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.605388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.605406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.612253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.612623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.612648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.619025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.619463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.619481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.626315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.626749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.626767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.633429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.633878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.633901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.641076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.641466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.641485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.647108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.647447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.647466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.653147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:16.056 [2024-07-26 11:35:11.653487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.653505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.660079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.660518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.660536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.667502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.667924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.667943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.674112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.674479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.674498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.680596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.680968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.680987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.686757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.687165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.687183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.692972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.056 [2024-07-26 11:35:11.693315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.056 [2024-07-26 11:35:11.693333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.056 [2024-07-26 11:35:11.698612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.057 [2024-07-26 11:35:11.698966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.057 [2024-07-26 11:35:11.698985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.057 [2024-07-26 
11:35:11.704361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.057 [2024-07-26 11:35:11.704699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.057 [2024-07-26 11:35:11.704718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.057 [2024-07-26 11:35:11.710381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.057 [2024-07-26 11:35:11.710751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.057 [2024-07-26 11:35:11.710770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.716431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.716814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.716832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.723157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.723515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.723533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.729489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.729844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.729862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.736060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.736423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.736441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.742275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.742670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.742689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.748500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.748874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.748893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.755155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.755484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.755502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.761911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.762243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.762261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.767673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.768027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.768045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.774386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.774793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.774812] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.781134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.781498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.781516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.787900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.788215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.788234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.793853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.794164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.794182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.798408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.798723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.798745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.802766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.803087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.803105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.807076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.807388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.807406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.811371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.811683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.811701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.316 [2024-07-26 11:35:11.815671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.316 [2024-07-26 11:35:11.815995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.316 [2024-07-26 11:35:11.816013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.820041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.820344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.820362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.824350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.824663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.824682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.828667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.828980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.828999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.832949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.833269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.833287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.837263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.837574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.837592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.841570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.841900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.841918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.845897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.846210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.846228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.850173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:16.317 [2024-07-26 11:35:11.850485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.850503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.854453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.854769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.854787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.858602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.858872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.858890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.862398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.862663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.862682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.866160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.866412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.866430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.869844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.870099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.870120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.874033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.874284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.874301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.877731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.877979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.877997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 
11:35:11.881397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.881660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.881678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.885400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.885649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.885667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.890199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.890460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.890479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.894950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.895187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.895205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.899362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.899605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.899623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.903615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.903866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.903883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.907421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.907677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.907695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.911179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.911433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.911450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.914955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.915203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.915221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.919176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.919435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.919453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.922997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.317 [2024-07-26 11:35:11.923249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.317 [2024-07-26 11:35:11.923267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.317 [2024-07-26 11:35:11.926826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.927081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.927100] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.930669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.930920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.930938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.934487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.934740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.934758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.938267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.938519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.938538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.942046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.942296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.942314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.945832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.946088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.946106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.949565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.949818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.949836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.953304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.953544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.953562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.957045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.957299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.957317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.960826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.961083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.961102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.964701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.964968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.964986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.318 [2024-07-26 11:35:11.969054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.318 [2024-07-26 11:35:11.969296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.318 [2024-07-26 11:35:11.969314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:11.973978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:11.974254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:11.974276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:11.978490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:11.978748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:11.978766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:11.983419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:11.983676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:11.983695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:11.988636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:11.988886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:11.988905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:11.992803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:16.579 [2024-07-26 11:35:11.993050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:11.993067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:11.996461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:11.996709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:11.996727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:12.000108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:12.000354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:12.000372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:12.003726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:12.003974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:12.003993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:12.007398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:12.007650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:12.007668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:12.011023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:12.011277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:12.011296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:12.014673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:12.014926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:12.014944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 11:35:12.018265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:12.018517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:12.018535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.579 [2024-07-26 
11:35:12.021914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.579 [2024-07-26 11:35:12.022159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.579 [2024-07-26 11:35:12.022177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.025560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.025817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.025835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.029154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.029397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.029415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.032833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.033067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.033085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.036450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.036702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.036720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.040074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.040342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.040360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.043741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.044001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.044020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.047426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.047680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.047698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.051062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.051310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.051328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.054751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.055000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.055018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.058396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.058643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.058661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.062025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.062273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.062291] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.065695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.065938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.065957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.069711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.069960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.069979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.073429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.073695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.073718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.077093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.077356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.077375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.080897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.081151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.081169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.085240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.085487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.085505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.090772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.091031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.091050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.094989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.095248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.095267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.099353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.099601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.099619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.103371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.103647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.103665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.107390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.107630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.107649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.112069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.112332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.112352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.116949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.117193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.117212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.121271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.580 [2024-07-26 11:35:12.121517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.580 [2024-07-26 11:35:12.121536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.580 [2024-07-26 11:35:12.125441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.125685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.125704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.129487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:16.581 [2024-07-26 11:35:12.129741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.129759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.133306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.133561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.133579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.137151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.137402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.137420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.140938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.141180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.141198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.144746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.144999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.145017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.148582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.148836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.148854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.152375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.152640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.152659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.156248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.156499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.156517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 
11:35:12.160336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.160590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.160608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.164286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.164539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.164558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.168172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.168424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.168442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.171948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.172195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.172213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.176320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.176568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.176588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.180145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.180396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.180414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.183931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.184185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.184203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.187711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.187955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.187973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.191492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.191739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.191756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.195286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.195528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.195546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.199065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.199318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.199337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.203205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.203514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.203532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.208273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.208622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.208646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.213929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.214227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.214245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.220129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.220474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.220492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.581 [2024-07-26 11:35:12.226710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.581 [2024-07-26 11:35:12.226955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.581 [2024-07-26 11:35:12.226972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.582 [2024-07-26 11:35:12.232349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.582 [2024-07-26 11:35:12.232609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.582 [2024-07-26 11:35:12.232633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.237773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.238028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.238046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.242646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.242909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.242927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.247485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.247752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.247770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.252372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.252611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.252634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.257370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.257624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.257648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.262279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.262515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.262536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.267292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.267531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.267549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.272104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.841 [2024-07-26 11:35:12.272349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.841 [2024-07-26 11:35:12.272366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.841 [2024-07-26 11:35:12.277404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.277678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.277696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.283015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.283310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.283328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.287769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:16.842 [2024-07-26 11:35:12.288016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.288034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.292836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.293085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.293103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.297691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.297936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.297954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.302845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.303145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.303163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.307707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.307967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.307985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.312608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.312866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.312883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.316737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.317012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.317031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.320634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.320891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.320909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 
11:35:12.324439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.324693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.324712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.328229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.328496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.328515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.332052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.332310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.332328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.335854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.336119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.336137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.339800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.340046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.340064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.344065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.344327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.344344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.349267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.349521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.349539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.353864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.354124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.354141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.358334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.358441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.358458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.362828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.363077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.363095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.367607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.367856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.367874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.372188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.372448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.372465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.376403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.376648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.376665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.380282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.380533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.380557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.384085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.384334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.842 [2024-07-26 11:35:12.384352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.842 [2024-07-26 11:35:12.387851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.842 [2024-07-26 11:35:12.388106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.388124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.391581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.391832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.391850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.395341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.395591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.395609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.399132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.399387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.399405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.402942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.403184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.403201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.406724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.406980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.406997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.410474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.410721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.410739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.414298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.414576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.414594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.419177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.419543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.419561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.424413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.424748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.424766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.430315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.430604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.430623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.435666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.435968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.435986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.439852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:16.843 [2024-07-26 11:35:12.440114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.440132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.443735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.443991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.444008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.447501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.447755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.447773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.451307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.451562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.451580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.455126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.455375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.455393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.458906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.459159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.459177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.462687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.462928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.462946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.466432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.466691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.466708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 
11:35:12.470205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.470466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.470484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.474571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.474828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.474846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.478460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.478714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.478732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.482252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.482514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.482532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.486252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.486511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.843 [2024-07-26 11:35:12.486533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.843 [2024-07-26 11:35:12.490685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.843 [2024-07-26 11:35:12.490944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.844 [2024-07-26 11:35:12.490962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.844 [2024-07-26 11:35:12.494564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.844 [2024-07-26 11:35:12.494825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.844 [2024-07-26 11:35:12.494844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.844 [2024-07-26 11:35:12.498576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:16.844 [2024-07-26 11:35:12.498840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.844 [2024-07-26 11:35:12.498858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.102 [2024-07-26 11:35:12.502523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.102 [2024-07-26 11:35:12.502799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.102 [2024-07-26 11:35:12.502817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.102 [2024-07-26 11:35:12.506458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.506717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.506736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.510269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.510520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.510538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.514110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.514360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.514377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.517891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.518136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.518154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.521781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.522032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.522050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.525994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.526269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.526287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.531055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.531296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.531314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.536221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.536468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.536486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.541181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.541423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.541441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.546065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.546323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.546342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.550956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.551203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.551221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.556343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.556593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.556610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.561185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.561445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.561466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.566202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.566444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.566462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.571075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.571335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.571354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.575875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.576128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.576146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.580603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.580863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.580882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.585346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.585593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.585611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.590207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:17.103 [2024-07-26 11:35:12.590467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.590485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.595814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.596068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.596086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.600545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.600822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.600840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.605769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.606022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.606040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.610604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.610858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.610876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.615462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.615713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.615730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.620316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.620562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.620580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 11:35:12.624526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.624783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.103 [2024-07-26 11:35:12.624801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.103 [2024-07-26 
11:35:12.628564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.103 [2024-07-26 11:35:12.628825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.628843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.632422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.632683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.632701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.636205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.636451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.636469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.640051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.640312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.640330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.644090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.644341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.644359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.648325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.648584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.648601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.652127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.652376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.652394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.655902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.656149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.656169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.659683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.659940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.659958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.663500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.663760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.663779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.667305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.667552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.667570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.671288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.671528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.671546] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.675343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.675615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.675643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.679202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.679454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.679472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.683023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.683278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.683296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.686886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.687137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.687155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.690766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.691027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.691055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.694614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.694888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.694906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.698498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.698759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.698776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.702341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.702591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.702609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.706133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.706379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.706396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.709949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.710203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.710221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.713752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.713996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.714014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.717543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.717798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.717816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.721747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.722003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.722021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.726528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.726790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.726808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.731339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.104 [2024-07-26 11:35:12.731578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.104 [2024-07-26 11:35:12.731595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.104 [2024-07-26 11:35:12.735522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:17.104 [2024-07-26 11:35:12.735776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.105 [2024-07-26 11:35:12.735794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.105 [2024-07-26 11:35:12.739637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.105 [2024-07-26 11:35:12.739887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.105 [2024-07-26 11:35:12.739905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.105 [2024-07-26 11:35:12.743803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.105 [2024-07-26 11:35:12.744055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.105 [2024-07-26 11:35:12.744073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.105 [2024-07-26 11:35:12.747897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.105 [2024-07-26 11:35:12.748143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.105 [2024-07-26 11:35:12.748162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.105 [2024-07-26 11:35:12.752051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.105 [2024-07-26 11:35:12.752304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.105 [2024-07-26 11:35:12.752322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.105 [2024-07-26 11:35:12.756192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.105 [2024-07-26 11:35:12.756449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.105 [2024-07-26 11:35:12.756467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.105 [2024-07-26 11:35:12.760347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.105 [2024-07-26 11:35:12.760593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.105 [2024-07-26 11:35:12.760611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.764443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.764714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.764732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.768686] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.768935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.768953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.773961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.774225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.774243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.778325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.778567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.778584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.782561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.782815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.782836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.786645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.786888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.786906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.790930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.791173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.791192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.794762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.795011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.795029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.798594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.798869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.798898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.802358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.802609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.802633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.806144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.806400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.806419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.809906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.810156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.810174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.813975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.814225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.814243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.818908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.819154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.819172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.823927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.824193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.824211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.828163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.828419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.364 [2024-07-26 11:35:12.828437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.364 [2024-07-26 11:35:12.832296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.364 [2024-07-26 11:35:12.832548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.832566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.836649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.836906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.836924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.840862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.841110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.841127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.845290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.845553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.845572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.849689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.849945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.849963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.853864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.854118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.854140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.858120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.858371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.858389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.862291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.862543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.862561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.866555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.866813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.866831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.870805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.871049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.871067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.874726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.874970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.874989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.878542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.878806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.878825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.882370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:17.365 [2024-07-26 11:35:12.882638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.882656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.886191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.886441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.886459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.890047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.890307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.890326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.894133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.894386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.894404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.897876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.898120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.898138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.901600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.901876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.901894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.905749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.906018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.906037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.909674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.909915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.909933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 
11:35:12.914342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.914580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.914598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.919190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.919444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.919462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.923911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.924160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.924179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.928601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.928851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.928869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.934202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.934449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.934467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.939009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.939258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.939276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.944360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.365 [2024-07-26 11:35:12.944622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.365 [2024-07-26 11:35:12.944775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.365 [2024-07-26 11:35:12.949050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.949302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.949320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.953237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.953485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.953504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.957177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.957429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.957447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.961302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.961560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.961578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.965497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.965764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.965786] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.969520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.969771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.969789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.973516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.973747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.973765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.978121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.978344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.978362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.982288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.982509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.982527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.986351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.986571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.986589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.990479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.990702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.990720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.994553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.994774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.994792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:12.999145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:12.999368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:12.999386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:13.003196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:13.003423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:13.003441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:13.007301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:13.007525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:13.007544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:13.011137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:13.011360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:13.011378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:13.015203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:13.015437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:13.015455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.366 [2024-07-26 11:35:13.019741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.366 [2024-07-26 11:35:13.019977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.366 [2024-07-26 11:35:13.019996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.024762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.024987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.025005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.029071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.029312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.029330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.033232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:17.626 [2024-07-26 11:35:13.033454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.033472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.037666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.037881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.037899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.041836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.042056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.042075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.045922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.046144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.046162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.050123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.050353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.050371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.054190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.054409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.054427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.057911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.058136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.058154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.061453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.061690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.061709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.065086] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.065326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.065345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.069183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.069408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.069426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.073048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.073268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.073289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.076714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.076937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.076955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.080353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.080586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.080605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.084457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.084705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.084724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.088185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.088406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.088425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.091812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.092033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.092052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.095427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.095646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.095665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.626 [2024-07-26 11:35:13.099370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.626 [2024-07-26 11:35:13.099592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.626 [2024-07-26 11:35:13.099611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.103622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.103846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.103864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.108331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.108560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.108578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.112593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.112834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.112853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.117283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.117516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.117534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.122333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.122555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.122574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.126819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.127054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:17.627 [2024-07-26 11:35:13.127072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.130674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.130912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.130930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.134448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.134675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.134694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.138220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.138461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.138480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.142013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.142257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.142275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.145788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.146017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.146035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.149473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.149697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.149715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.153891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.154167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.154185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.158830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.159096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.159114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.163942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.164269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.164287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.169036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.169359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.169378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.174712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.175055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.175074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.180298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 
00:27:17.627 [2024-07-26 11:35:13.180648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.180667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.185766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.186098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.186120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.190852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.191180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.191199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.196481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.196800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.196819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.201798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.202127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.202146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.206999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.207340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.207360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.212567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.212847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.212866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.218131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.218381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.218400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 
11:35:13.223337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.223607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.627 [2024-07-26 11:35:13.223625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.627 [2024-07-26 11:35:13.229269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.627 [2024-07-26 11:35:13.229606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.229624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.234897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.235169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.235187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.240011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.240346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.240364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.245441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.245704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.245723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.249709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.249942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.249960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.253600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.253859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.253877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.257754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.258040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.258058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.262121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.262370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.262388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.266254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.266501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.266519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.271384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.271636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.271658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.275190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.275409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.275427] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.278925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.279168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.279186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.628 [2024-07-26 11:35:13.282791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.628 [2024-07-26 11:35:13.283026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.628 [2024-07-26 11:35:13.283044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.888 [2024-07-26 11:35:13.286782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.888 [2024-07-26 11:35:13.287018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.888 [2024-07-26 11:35:13.287036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.888 [2024-07-26 11:35:13.290649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.888 [2024-07-26 11:35:13.290876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.888 [2024-07-26 11:35:13.290894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.888 [2024-07-26 11:35:13.294453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.888 [2024-07-26 11:35:13.294684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.888 [2024-07-26 11:35:13.294702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.888 [2024-07-26 11:35:13.298226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12870a0) with pdu=0x2000190fef90 00:27:17.888 [2024-07-26 11:35:13.298457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.888 [2024-07-26 11:35:13.298475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.888 00:27:17.888 Latency(us) 00:27:17.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.888 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:17.888 nvme0n1 : 2.00 6735.54 841.94 0.00 0.00 2371.96 1677.41 11297.16 00:27:17.888 =================================================================================================================== 00:27:17.888 Total : 6735.54 841.94 0.00 0.00 2371.96 1677.41 11297.16 00:27:17.888 0 00:27:17.888 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:17.888 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
00:27:17.888 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:17.888 | .driver_specific 00:27:17.888 | .nvme_error 00:27:17.888 | .status_code 00:27:17.888 | .command_transient_transport_error' 00:27:17.888 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:17.888 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 434 > 0 )) 00:27:17.888 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1660790 00:27:17.888 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1660790 ']' 00:27:17.889 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1660790 00:27:17.889 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:17.889 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:17.889 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1660790 00:27:17.889 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1660790' 00:27:18.146 killing process with pid 1660790 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1660790 00:27:18.146 Received shutdown signal, test time was about 2.000000 seconds 
00:27:18.146 00:27:18.146 Latency(us) 00:27:18.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.146 =================================================================================================================== 00:27:18.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1660790 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1658778 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1658778 ']' 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1658778 00:27:18.146 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:18.147 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.147 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658778 00:27:18.147 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.147 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.147 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658778' 00:27:18.147 killing process with pid 1658778 00:27:18.147 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1658778 00:27:18.147 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1658778 00:27:18.405 00:27:18.405 real 0m16.681s 00:27:18.405 user 0m31.715s 00:27:18.406 sys 0m4.749s 
00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.406 ************************************ 00:27:18.406 END TEST nvmf_digest_error 00:27:18.406 ************************************ 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.406 11:35:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.406 rmmod nvme_tcp 00:27:18.406 rmmod nvme_fabrics 00:27:18.406 rmmod nvme_keyring 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1658778 ']' 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1658778 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1658778 ']' 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1658778 00:27:18.406 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1658778) - No such process 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1658778 is not found' 00:27:18.406 Process with pid 1658778 is not found 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.406 11:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.942 00:27:20.942 real 0m41.990s 00:27:20.942 user 1m6.042s 00:27:20.942 sys 0m13.951s 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:20.942 ************************************ 00:27:20.942 END TEST nvmf_digest 00:27:20.942 ************************************ 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:20.942 11:35:16 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.942 11:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.942 ************************************ 00:27:20.942 START TEST nvmf_bdevperf 00:27:20.942 ************************************ 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:20.943 * Looking for test storage... 00:27:20.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.943 
11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- 
# '[' -z tcp ']' 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.943 11:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@295 -- # net_devs=() 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.283 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.284 11:35:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:26.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:26.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # 
[[ ice == unbound ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:26.284 Found net devices under 0000:86:00.0: cvl_0_0 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.284 11:35:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:26.284 Found net devices under 0000:86:00.1: cvl_0_1 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.284 11:35:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.284 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.542 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.542 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.542 11:35:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:26.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:27:26.542 00:27:26.542 --- 10.0.0.2 ping statistics --- 00:27:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.542 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:27:26.542 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:27:26.542 00:27:26.542 --- 10.0.0.1 ping statistics --- 00:27:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.542 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:26.542 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.542 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:26.542 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:26.543 
11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1665008 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1665008 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1665008 ']' 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.543 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:26.543 [2024-07-26 11:35:22.102340] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:27:26.543 [2024-07-26 11:35:22.102379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.543 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.543 [2024-07-26 11:35:22.168884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:26.826 [2024-07-26 11:35:22.247671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.826 [2024-07-26 11:35:22.247705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.826 [2024-07-26 11:35:22.247712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.826 [2024-07-26 11:35:22.247718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.826 [2024-07-26 11:35:22.247725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:26.826 [2024-07-26 11:35:22.247852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.826 [2024-07-26 11:35:22.247958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.826 [2024-07-26 11:35:22.247960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.391 [2024-07-26 11:35:22.950385] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.391 Malloc0 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.391 11:35:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.391 [2024-07-26 11:35:23.014672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:27.391 
11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:27.391 { 00:27:27.391 "params": { 00:27:27.391 "name": "Nvme$subsystem", 00:27:27.391 "trtype": "$TEST_TRANSPORT", 00:27:27.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.391 "adrfam": "ipv4", 00:27:27.391 "trsvcid": "$NVMF_PORT", 00:27:27.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.391 "hdgst": ${hdgst:-false}, 00:27:27.391 "ddgst": ${ddgst:-false} 00:27:27.391 }, 00:27:27.391 "method": "bdev_nvme_attach_controller" 00:27:27.391 } 00:27:27.391 EOF 00:27:27.391 )") 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:27.391 11:35:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:27.391 "params": { 00:27:27.391 "name": "Nvme1", 00:27:27.391 "trtype": "tcp", 00:27:27.391 "traddr": "10.0.0.2", 00:27:27.391 "adrfam": "ipv4", 00:27:27.391 "trsvcid": "4420", 00:27:27.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:27.391 "hdgst": false, 00:27:27.391 "ddgst": false 00:27:27.391 }, 00:27:27.391 "method": "bdev_nvme_attach_controller" 00:27:27.391 }' 00:27:27.647 [2024-07-26 11:35:23.063103] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:27:27.647 [2024-07-26 11:35:23.063150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665073 ] 00:27:27.647 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.647 [2024-07-26 11:35:23.130317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.647 [2024-07-26 11:35:23.204821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.903 Running I/O for 1 seconds... 00:27:28.831 00:27:28.831 Latency(us) 00:27:28.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.831 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:28.831 Verification LBA range: start 0x0 length 0x4000 00:27:28.831 Nvme1n1 : 1.00 11506.96 44.95 0.00 0.00 11073.62 799.70 14792.41 00:27:28.831 =================================================================================================================== 00:27:28.831 Total : 11506.96 44.95 0.00 0.00 11073.62 799.70 14792.41 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1665384 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.088 11:35:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.088 { 00:27:29.088 "params": { 00:27:29.088 "name": "Nvme$subsystem", 00:27:29.088 "trtype": "$TEST_TRANSPORT", 00:27:29.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.088 "adrfam": "ipv4", 00:27:29.088 "trsvcid": "$NVMF_PORT", 00:27:29.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.088 "hdgst": ${hdgst:-false}, 00:27:29.088 "ddgst": ${ddgst:-false} 00:27:29.088 }, 00:27:29.088 "method": "bdev_nvme_attach_controller" 00:27:29.088 } 00:27:29.088 EOF 00:27:29.088 )") 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:29.088 11:35:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:29.088 "params": { 00:27:29.088 "name": "Nvme1", 00:27:29.088 "trtype": "tcp", 00:27:29.088 "traddr": "10.0.0.2", 00:27:29.088 "adrfam": "ipv4", 00:27:29.088 "trsvcid": "4420", 00:27:29.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:29.088 "hdgst": false, 00:27:29.088 "ddgst": false 00:27:29.088 }, 00:27:29.088 "method": "bdev_nvme_attach_controller" 00:27:29.088 }' 00:27:29.088 [2024-07-26 11:35:24.636252] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:27:29.088 [2024-07-26 11:35:24.636300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665384 ] 00:27:29.088 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.088 [2024-07-26 11:35:24.704455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.345 [2024-07-26 11:35:24.777509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.601 Running I/O for 15 seconds... 00:27:32.126 11:35:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1665008 00:27:32.126 11:35:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:32.126 [2024-07-26 11:35:27.606246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103312 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 
11:35:27.606430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606519] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.126 [2024-07-26 11:35:27.606580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.126 [2024-07-26 11:35:27.606590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103448 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606715] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.606991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.606998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 
[2024-07-26 11:35:27.607117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.127 [2024-07-26 11:35:27.607152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.127 [2024-07-26 11:35:27.607159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:32.128 [2024-07-26 11:35:27.607355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 
[2024-07-26 11:35:27.607591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.128 [2024-07-26 11:35:27.607693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.128 [2024-07-26 11:35:27.607705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.128 [2024-07-26 11:35:27.607711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:32.129 [2024-07-26 11:35:27.607838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.129 [2024-07-26 11:35:27.607937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.607987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.607993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:32.129 [2024-07-26 11:35:27.608075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.129 [2024-07-26 11:35:27.608132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa8ee0 is same with the state(5) to be set 00:27:32.129 [2024-07-26 11:35:27.608147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:32.129 [2024-07-26 11:35:27.608153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:27:32.129 [2024-07-26 11:35:27.608159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103280 len:8 PRP1 0x0 PRP2 0x0 00:27:32.129 [2024-07-26 11:35:27.608166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.129 [2024-07-26 11:35:27.608207] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa8ee0 was disconnected and freed. reset controller. 00:27:32.129 [2024-07-26 11:35:27.611071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.129 [2024-07-26 11:35:27.611124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.129 [2024-07-26 11:35:27.611722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.129 [2024-07-26 11:35:27.611739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.129 [2024-07-26 11:35:27.611746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.129 [2024-07-26 11:35:27.611918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.129 [2024-07-26 11:35:27.612090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.129 [2024-07-26 11:35:27.612098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.129 [2024-07-26 11:35:27.612105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.129 [2024-07-26 11:35:27.614858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.129 [2024-07-26 11:35:27.624225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.129 [2024-07-26 11:35:27.624615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.129 [2024-07-26 11:35:27.624638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.129 [2024-07-26 11:35:27.624646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.129 [2024-07-26 11:35:27.624813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.624984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.624992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.624998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.627602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.636962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.637381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.637398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.637404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.637571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.637746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.637754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.637760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.640363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.649743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.650115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.650131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.650138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.650304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.650473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.650481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.650487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.653153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.662587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.663007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.663051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.663072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.663614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.663786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.663794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.663800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.666405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.675380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.675822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.675837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.675844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.676002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.676159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.676166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.676172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.678751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.688213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.688665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.688708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.688731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.689308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.689526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.689533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.689540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.692207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.701011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.701413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.701428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.701434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.701593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.701778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.701787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.701793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.704392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.713855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.714295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.714349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.714378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.714901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.715068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.715076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.715082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.717681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.726604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.727011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.727055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.727076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.727592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.727777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.727786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.727792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.733442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.130 [2024-07-26 11:35:27.741658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.130 [2024-07-26 11:35:27.742161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.130 [2024-07-26 11:35:27.742181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.130 [2024-07-26 11:35:27.742191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.130 [2024-07-26 11:35:27.742442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.130 [2024-07-26 11:35:27.742703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.130 [2024-07-26 11:35:27.742714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.130 [2024-07-26 11:35:27.742723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.130 [2024-07-26 11:35:27.746775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.131 [2024-07-26 11:35:27.754596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.131 [2024-07-26 11:35:27.755009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.131 [2024-07-26 11:35:27.755025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.131 [2024-07-26 11:35:27.755031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.131 [2024-07-26 11:35:27.755197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.131 [2024-07-26 11:35:27.755362] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.131 [2024-07-26 11:35:27.755373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.131 [2024-07-26 11:35:27.755379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.131 [2024-07-26 11:35:27.758046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.131 [2024-07-26 11:35:27.767385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.131 [2024-07-26 11:35:27.767804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.131 [2024-07-26 11:35:27.767819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.131 [2024-07-26 11:35:27.767826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.131 [2024-07-26 11:35:27.767992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.131 [2024-07-26 11:35:27.768158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.131 [2024-07-26 11:35:27.768165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.131 [2024-07-26 11:35:27.768171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.131 [2024-07-26 11:35:27.770778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.131 [2024-07-26 11:35:27.780551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.131 [2024-07-26 11:35:27.780972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.131 [2024-07-26 11:35:27.781025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.131 [2024-07-26 11:35:27.781048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.131 [2024-07-26 11:35:27.781566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.131 [2024-07-26 11:35:27.781746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.131 [2024-07-26 11:35:27.781754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.131 [2024-07-26 11:35:27.781761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.390 [2024-07-26 11:35:27.784591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.390 [2024-07-26 11:35:27.793731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.390 [2024-07-26 11:35:27.794158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.390 [2024-07-26 11:35:27.794204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.390 [2024-07-26 11:35:27.794227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.390 [2024-07-26 11:35:27.794771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.390 [2024-07-26 11:35:27.794939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.390 [2024-07-26 11:35:27.794947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.390 [2024-07-26 11:35:27.794953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.390 [2024-07-26 11:35:27.797622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.390 [2024-07-26 11:35:27.806501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.390 [2024-07-26 11:35:27.806852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.390 [2024-07-26 11:35:27.806868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.390 [2024-07-26 11:35:27.806875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.390 [2024-07-26 11:35:27.807040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.390 [2024-07-26 11:35:27.807210] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.390 [2024-07-26 11:35:27.807218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.390 [2024-07-26 11:35:27.807224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.390 [2024-07-26 11:35:27.809831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.390 [2024-07-26 11:35:27.819308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.390 [2024-07-26 11:35:27.819729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.390 [2024-07-26 11:35:27.819745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.390 [2024-07-26 11:35:27.819752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.390 [2024-07-26 11:35:27.819919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.390 [2024-07-26 11:35:27.820085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.390 [2024-07-26 11:35:27.820092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.390 [2024-07-26 11:35:27.820098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.390 [2024-07-26 11:35:27.822707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.390 [2024-07-26 11:35:27.832160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.390 [2024-07-26 11:35:27.832539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.390 [2024-07-26 11:35:27.832582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.832604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.833082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.833249] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.833257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.833263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.835863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.845004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.845367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.845383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.845389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.845559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.845733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.845742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.845748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.848348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.857846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.858266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.858283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.858290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.858461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.858640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.858649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.858655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.861400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.870821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.871160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.871176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.871183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.871355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.871529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.871537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.871544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.874292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.883845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.884227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.884270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.884291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.884877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.885341] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.885349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.885358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.888077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.896811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.897257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.897273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.897280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.897451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.897622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.897637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.897644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.900336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.909530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.909953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.909968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.909975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.910141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.910306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.910314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.910320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.912930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.922263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.922661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.922677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.922683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.922841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.922998] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.923005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.923011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.925596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.935075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.935519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.935538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.935545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.935716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.935882] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.935890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.935895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.938556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.948050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.948480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.948516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.948539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.949136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.949308] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.949315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.391 [2024-07-26 11:35:27.949322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.391 [2024-07-26 11:35:27.952065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.391 [2024-07-26 11:35:27.961143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.391 [2024-07-26 11:35:27.961569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.391 [2024-07-26 11:35:27.961585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.391 [2024-07-26 11:35:27.961591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.391 [2024-07-26 11:35:27.961767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.391 [2024-07-26 11:35:27.961938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.391 [2024-07-26 11:35:27.961945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.392 [2024-07-26 11:35:27.961952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.392 [2024-07-26 11:35:27.964697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.392 [2024-07-26 11:35:27.974028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.392 [2024-07-26 11:35:27.974400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.392 [2024-07-26 11:35:27.974416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.392 [2024-07-26 11:35:27.974423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.392 [2024-07-26 11:35:27.974594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.392 [2024-07-26 11:35:27.974775] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.392 [2024-07-26 11:35:27.974783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.392 [2024-07-26 11:35:27.974789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.392 [2024-07-26 11:35:27.977425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.392 [2024-07-26 11:35:27.986803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.392 [2024-07-26 11:35:27.987237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.392 [2024-07-26 11:35:27.987252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.392 [2024-07-26 11:35:27.987258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.392 [2024-07-26 11:35:27.987416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.392 [2024-07-26 11:35:27.987574] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.392 [2024-07-26 11:35:27.987581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.392 [2024-07-26 11:35:27.987587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.392 [2024-07-26 11:35:27.990205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.392 [2024-07-26 11:35:27.999710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.392 [2024-07-26 11:35:28.000117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.392 [2024-07-26 11:35:28.000132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.392 [2024-07-26 11:35:28.000138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.392 [2024-07-26 11:35:28.000305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.392 [2024-07-26 11:35:28.000475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.392 [2024-07-26 11:35:28.000482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.392 [2024-07-26 11:35:28.000489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.392 [2024-07-26 11:35:28.003095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.392 [2024-07-26 11:35:28.012478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.392 [2024-07-26 11:35:28.012916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.392 [2024-07-26 11:35:28.012933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.392 [2024-07-26 11:35:28.012940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.392 [2024-07-26 11:35:28.013106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.392 [2024-07-26 11:35:28.013272] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.392 [2024-07-26 11:35:28.013280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.392 [2024-07-26 11:35:28.013285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.392 [2024-07-26 11:35:28.015895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.392 [2024-07-26 11:35:28.025222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.392 [2024-07-26 11:35:28.025644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.392 [2024-07-26 11:35:28.025660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.392 [2024-07-26 11:35:28.025666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.392 [2024-07-26 11:35:28.025824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.392 [2024-07-26 11:35:28.025980] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.392 [2024-07-26 11:35:28.025988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.392 [2024-07-26 11:35:28.025993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.392 [2024-07-26 11:35:28.028581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.392 [2024-07-26 11:35:28.038044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.392 [2024-07-26 11:35:28.038462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.392 [2024-07-26 11:35:28.038477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.392 [2024-07-26 11:35:28.038483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.392 [2024-07-26 11:35:28.038646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.392 [2024-07-26 11:35:28.038828] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.392 [2024-07-26 11:35:28.038836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.392 [2024-07-26 11:35:28.038842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.392 [2024-07-26 11:35:28.041449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.051034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.051398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.051414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.051421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.051579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.051764] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.051773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.051779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.054566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.063777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.064193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.064209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.064219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.064387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.064553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.064561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.064567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.067175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.076503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.076941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.076958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.076964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.077130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.077298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.077306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.077311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.079980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.089218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.089647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.089691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.089713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.090295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.090453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.090460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.090466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.093138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.102001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.102349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.102364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.102370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.102527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.102708] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.102720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.102726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.105325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.114796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.115214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.115230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.115237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.115403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.115569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.115576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.115582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.118339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.127819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.128240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.128255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.128262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.128428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.128594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.128602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.128608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.131347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.140698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.652 [2024-07-26 11:35:28.141064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.652 [2024-07-26 11:35:28.141080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.652 [2024-07-26 11:35:28.141087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.652 [2024-07-26 11:35:28.141258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.652 [2024-07-26 11:35:28.141430] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.652 [2024-07-26 11:35:28.141438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.652 [2024-07-26 11:35:28.141444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.652 [2024-07-26 11:35:28.144180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.652 [2024-07-26 11:35:28.153580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.652 [2024-07-26 11:35:28.154022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.652 [2024-07-26 11:35:28.154063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.652 [2024-07-26 11:35:28.154085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.652 [2024-07-26 11:35:28.154546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.652 [2024-07-26 11:35:28.154710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.652 [2024-07-26 11:35:28.154718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.652 [2024-07-26 11:35:28.154723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.652 [2024-07-26 11:35:28.157250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.652 [2024-07-26 11:35:28.166320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.652 [2024-07-26 11:35:28.166652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.652 [2024-07-26 11:35:28.166667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.652 [2024-07-26 11:35:28.166674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.652 [2024-07-26 11:35:28.166831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.652 [2024-07-26 11:35:28.166989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.652 [2024-07-26 11:35:28.166996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.167002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.169586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.179083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.179504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.179520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.179526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.179707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.179874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.179881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.179887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.182547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.191867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.192252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.192268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.192275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.192447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.192615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.192622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.192634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.195260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.204650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.205078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.205093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.205099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.205257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.205414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.205421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.205426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.208035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.217358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.217754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.217769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.217775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.217933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.218091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.218098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.218104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.220693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.230164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.230584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.230599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.230605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.230792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.230963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.230971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.230982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.233581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.242911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.243349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.243365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.243371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.243537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.243710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.243718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.243724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.246323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.255745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.256191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.256206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.256212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.256378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.256544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.256551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.256557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.259163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.268537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.268948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.268964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.268970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.269136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.269306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.269314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.269320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.271926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.281264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.281689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.281708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.281714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.281881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.282048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.282056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.653 [2024-07-26 11:35:28.282063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.653 [2024-07-26 11:35:28.284699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.653 [2024-07-26 11:35:28.294108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.653 [2024-07-26 11:35:28.294551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.653 [2024-07-26 11:35:28.294566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.653 [2024-07-26 11:35:28.294573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.653 [2024-07-26 11:35:28.294764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.653 [2024-07-26 11:35:28.294943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.653 [2024-07-26 11:35:28.294950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.654 [2024-07-26 11:35:28.294956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.654 [2024-07-26 11:35:28.297557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.654 [2024-07-26 11:35:28.307121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.654 [2024-07-26 11:35:28.307574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.654 [2024-07-26 11:35:28.307591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.654 [2024-07-26 11:35:28.307598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.654 [2024-07-26 11:35:28.307783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.654 [2024-07-26 11:35:28.307965] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.654 [2024-07-26 11:35:28.307975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.654 [2024-07-26 11:35:28.307981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.913 [2024-07-26 11:35:28.310839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.913 [2024-07-26 11:35:28.319941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.913 [2024-07-26 11:35:28.320353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.913 [2024-07-26 11:35:28.320370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.913 [2024-07-26 11:35:28.320377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.913 [2024-07-26 11:35:28.320535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.913 [2024-07-26 11:35:28.320720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.913 [2024-07-26 11:35:28.320729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.913 [2024-07-26 11:35:28.320735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.913 [2024-07-26 11:35:28.323338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.913 [2024-07-26 11:35:28.332804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.913 [2024-07-26 11:35:28.333248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.913 [2024-07-26 11:35:28.333263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.913 [2024-07-26 11:35:28.333270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.913 [2024-07-26 11:35:28.333436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.913 [2024-07-26 11:35:28.333603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.913 [2024-07-26 11:35:28.333611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.913 [2024-07-26 11:35:28.333617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.913 [2024-07-26 11:35:28.336221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.913 [2024-07-26 11:35:28.345599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.913 [2024-07-26 11:35:28.346059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.913 [2024-07-26 11:35:28.346103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.913 [2024-07-26 11:35:28.346124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.913 [2024-07-26 11:35:28.346716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.913 [2024-07-26 11:35:28.347256] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.913 [2024-07-26 11:35:28.347264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.913 [2024-07-26 11:35:28.347270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.913 [2024-07-26 11:35:28.349871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.913 [2024-07-26 11:35:28.358360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.913 [2024-07-26 11:35:28.358784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.913 [2024-07-26 11:35:28.358800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.913 [2024-07-26 11:35:28.358806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.913 [2024-07-26 11:35:28.358964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.913 [2024-07-26 11:35:28.359121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.913 [2024-07-26 11:35:28.359128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.913 [2024-07-26 11:35:28.359133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.913 [2024-07-26 11:35:28.361727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.371193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.371523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.371539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.371545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.371735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.371907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.371915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.371921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.374664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.384197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.384641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.384658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.384664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.384835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.385012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.385020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.385026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.387624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.396940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.397387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.397425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.397447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.398047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.398220] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.398228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.398234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.400929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.409692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.410036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.410050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.410060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.410218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.410375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.410382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.410388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.412993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.422470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.422879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.422895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.422901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.423059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.423216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.423223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.423229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.425825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.435240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.435656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.435672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.435679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.435856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.436014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.436021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.436027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.438607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.448031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.448471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.448487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.448494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.448664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.448831] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.448841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.448848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.451447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.460780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.914 [2024-07-26 11:35:28.461196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.914 [2024-07-26 11:35:28.461212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:32.914 [2024-07-26 11:35:28.461219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:32.914 [2024-07-26 11:35:28.461385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:32.914 [2024-07-26 11:35:28.461554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.914 [2024-07-26 11:35:28.461562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.914 [2024-07-26 11:35:28.461568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.914 [2024-07-26 11:35:28.464172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.914 [2024-07-26 11:35:28.473514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.914 [2024-07-26 11:35:28.473861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.914 [2024-07-26 11:35:28.473878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.473884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.474049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.474219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.474226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.474232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.476839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.915 [2024-07-26 11:35:28.486306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.915 [2024-07-26 11:35:28.486729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.915 [2024-07-26 11:35:28.486772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.486793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.487371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.487566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.487574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.487579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.490186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.915 [2024-07-26 11:35:28.499138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.915 [2024-07-26 11:35:28.499527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.915 [2024-07-26 11:35:28.499542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.499549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.499722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.499888] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.499895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.499902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.502565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.915 [2024-07-26 11:35:28.511976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.915 [2024-07-26 11:35:28.512318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.915 [2024-07-26 11:35:28.512332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.512339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.512505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.512674] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.512681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.512687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.515277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.915 [2024-07-26 11:35:28.524732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.915 [2024-07-26 11:35:28.525101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.915 [2024-07-26 11:35:28.525138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.525161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.525753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.525920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.525928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.525934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.528529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.915 [2024-07-26 11:35:28.537553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.915 [2024-07-26 11:35:28.537999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.915 [2024-07-26 11:35:28.538014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.538020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.538190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.538356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.538364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.538370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.540971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.915 [2024-07-26 11:35:28.550385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.915 [2024-07-26 11:35:28.550751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.915 [2024-07-26 11:35:28.550796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.550818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.551395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.551864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.551873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.551878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.554547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.915 [2024-07-26 11:35:28.563151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.915 [2024-07-26 11:35:28.563601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.915 [2024-07-26 11:35:28.563642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:32.915 [2024-07-26 11:35:28.563666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:32.915 [2024-07-26 11:35:28.564243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:32.915 [2024-07-26 11:35:28.564777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.915 [2024-07-26 11:35:28.564785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.915 [2024-07-26 11:35:28.564791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.915 [2024-07-26 11:35:28.567464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.175 [2024-07-26 11:35:28.575971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.175 [2024-07-26 11:35:28.576368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-07-26 11:35:28.576386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.175 [2024-07-26 11:35:28.576393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.175 [2024-07-26 11:35:28.576565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.175 [2024-07-26 11:35:28.576746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.175 [2024-07-26 11:35:28.576755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.175 [2024-07-26 11:35:28.576764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.175 [2024-07-26 11:35:28.579491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.175 [2024-07-26 11:35:28.588763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.175 [2024-07-26 11:35:28.589164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-07-26 11:35:28.589180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.175 [2024-07-26 11:35:28.589187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.175 [2024-07-26 11:35:28.589345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.175 [2024-07-26 11:35:28.589503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.175 [2024-07-26 11:35:28.589511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.175 [2024-07-26 11:35:28.589517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.175 [2024-07-26 11:35:28.592128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.175 [2024-07-26 11:35:28.601566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.175 [2024-07-26 11:35:28.602012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-07-26 11:35:28.602047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.175 [2024-07-26 11:35:28.602069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.175 [2024-07-26 11:35:28.602670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.175 [2024-07-26 11:35:28.602829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.175 [2024-07-26 11:35:28.602837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.175 [2024-07-26 11:35:28.602842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.175 [2024-07-26 11:35:28.605359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.175 [2024-07-26 11:35:28.614374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.175 [2024-07-26 11:35:28.614803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-07-26 11:35:28.614818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.175 [2024-07-26 11:35:28.614824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.175 [2024-07-26 11:35:28.614991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.175 [2024-07-26 11:35:28.615157] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.175 [2024-07-26 11:35:28.615165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.175 [2024-07-26 11:35:28.615171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.175 [2024-07-26 11:35:28.617838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.175 [2024-07-26 11:35:28.627136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.175 [2024-07-26 11:35:28.627486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-07-26 11:35:28.627504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.175 [2024-07-26 11:35:28.627511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.175 [2024-07-26 11:35:28.627699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.175 [2024-07-26 11:35:28.627871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.175 [2024-07-26 11:35:28.627879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.175 [2024-07-26 11:35:28.627886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.175 [2024-07-26 11:35:28.630730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.175 [2024-07-26 11:35:28.640026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.175 [2024-07-26 11:35:28.640416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-07-26 11:35:28.640432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.175 [2024-07-26 11:35:28.640439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.175 [2024-07-26 11:35:28.640610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.175 [2024-07-26 11:35:28.640788] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.175 [2024-07-26 11:35:28.640797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.175 [2024-07-26 11:35:28.640803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.175 [2024-07-26 11:35:28.643482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.175 [2024-07-26 11:35:28.652959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.175 [2024-07-26 11:35:28.653375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.175 [2024-07-26 11:35:28.653391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.175 [2024-07-26 11:35:28.653397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.175 [2024-07-26 11:35:28.653563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.175 [2024-07-26 11:35:28.653735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.175 [2024-07-26 11:35:28.653744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.175 [2024-07-26 11:35:28.653749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.656348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.665756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.666191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.666233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.666254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.666853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.667024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.667032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.667039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.669665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.678778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.679217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.679260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.679281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.679845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.680013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.680020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.680027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.682694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.691586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.691961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.691977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.691984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.692150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.692316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.692324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.692329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.695012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.704322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.704739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.704755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.704762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.704927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.705094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.705102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.705107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.707710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.717323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.717691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.717707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.717714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.717885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.718057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.718065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.718071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.721026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.730155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.730576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.730592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.730599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.730771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.730937] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.730944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.730951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.733550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.742988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.743290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.743306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.743312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.743478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.743649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.743657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.743663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.746270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.755743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.756173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.756188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.756198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.756355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.756512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.756520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.756525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.759149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.768564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.768989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.769004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.769011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.769177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.769347] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.769355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.176 [2024-07-26 11:35:28.769361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.176 [2024-07-26 11:35:28.771962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.176 [2024-07-26 11:35:28.781344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.176 [2024-07-26 11:35:28.781723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.176 [2024-07-26 11:35:28.781738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.176 [2024-07-26 11:35:28.781745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.176 [2024-07-26 11:35:28.781919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.176 [2024-07-26 11:35:28.782086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.176 [2024-07-26 11:35:28.782093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.177 [2024-07-26 11:35:28.782099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.177 [2024-07-26 11:35:28.784772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.177 [2024-07-26 11:35:28.794217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.177 [2024-07-26 11:35:28.794598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.177 [2024-07-26 11:35:28.794614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.177 [2024-07-26 11:35:28.794620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.177 [2024-07-26 11:35:28.794793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.177 [2024-07-26 11:35:28.794959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.177 [2024-07-26 11:35:28.794970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.177 [2024-07-26 11:35:28.794976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.177 [2024-07-26 11:35:28.797575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.177 [2024-07-26 11:35:28.807057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.177 [2024-07-26 11:35:28.807357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.177 [2024-07-26 11:35:28.807373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.177 [2024-07-26 11:35:28.807379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.177 [2024-07-26 11:35:28.807545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.177 [2024-07-26 11:35:28.807720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.177 [2024-07-26 11:35:28.807729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.177 [2024-07-26 11:35:28.807735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.177 [2024-07-26 11:35:28.810335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.177 [2024-07-26 11:35:28.819902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.177 [2024-07-26 11:35:28.820197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.177 [2024-07-26 11:35:28.820212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.177 [2024-07-26 11:35:28.820219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.177 [2024-07-26 11:35:28.820384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.177 [2024-07-26 11:35:28.820555] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.177 [2024-07-26 11:35:28.820563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.177 [2024-07-26 11:35:28.820569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.177 [2024-07-26 11:35:28.823174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.833066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.833427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.833445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.833453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.833638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.833824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.833832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.833838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.836464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.846062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.846407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.846452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.846475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.847065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.847602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.847610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.847616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.850224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.858906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.859268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.859285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.859291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.859458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.859625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.859638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.859645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.862246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.871753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.872103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.872119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.872125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.872290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.872456] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.872464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.872470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.875076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.884695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.884990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.885006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.885013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.885187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.885359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.885367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.885373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.888120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.897624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.897929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.897944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.897951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.898117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.898287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.898295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.898301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.900967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.910621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.911035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.911079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.911100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.911580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.911752] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.911760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.911766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.914430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.923404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.923870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.923886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.923893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.924060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.924230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.924238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.924247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.926853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.936267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.936748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.936792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.936813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.937390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.937786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.937795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.937801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.940403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.949166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.949541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.949557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.949564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.949743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.949915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.949922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.949928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.952686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.962262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.962692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.962732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.962755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.963333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.963685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.963695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.963702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.966442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.975356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.975750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.975801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.975824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.976402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.976765] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.976776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.976782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.979524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:28.988439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:28.988794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:28.988811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:28.988819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:28.988990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:28.989163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:28.989172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:28.989178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:28.991930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:29.001387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:29.001739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:29.001783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:29.001805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:29.002383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:29.002834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:29.002844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:29.002850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:29.005366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:29.014186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:29.014533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:29.014550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:29.014558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:29.014723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:29.014886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:29.014896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:29.014902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:29.017423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:29.027044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:29.027456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:29.027473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:29.027479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:29.027641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:29.027801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:29.027810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:29.027816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:29.030335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:29.039869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:29.040189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:29.040206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:29.040213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:29.040372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:29.040530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:29.040540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:29.040546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:29.043083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:29.052712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:29.053037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.436 [2024-07-26 11:35:29.053053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.436 [2024-07-26 11:35:29.053060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.436 [2024-07-26 11:35:29.053217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.436 [2024-07-26 11:35:29.053376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.436 [2024-07-26 11:35:29.053385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.436 [2024-07-26 11:35:29.053390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.436 [2024-07-26 11:35:29.055916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.436 [2024-07-26 11:35:29.065453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.436 [2024-07-26 11:35:29.065765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-26 11:35:29.065783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.437 [2024-07-26 11:35:29.065790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.437 [2024-07-26 11:35:29.065957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.437 [2024-07-26 11:35:29.066124] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.437 [2024-07-26 11:35:29.066134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.437 [2024-07-26 11:35:29.066140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.437 [2024-07-26 11:35:29.068705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.437 [2024-07-26 11:35:29.078232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.437 [2024-07-26 11:35:29.078647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-26 11:35:29.078664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.437 [2024-07-26 11:35:29.078671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.437 [2024-07-26 11:35:29.078828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.437 [2024-07-26 11:35:29.078986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.437 [2024-07-26 11:35:29.078995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.437 [2024-07-26 11:35:29.079002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.437 [2024-07-26 11:35:29.081525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.437 [2024-07-26 11:35:29.091206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.437 [2024-07-26 11:35:29.091658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.437 [2024-07-26 11:35:29.091676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.437 [2024-07-26 11:35:29.091684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.437 [2024-07-26 11:35:29.091865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.437 [2024-07-26 11:35:29.092033] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.437 [2024-07-26 11:35:29.092042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.437 [2024-07-26 11:35:29.092048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.696 [2024-07-26 11:35:29.094908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.696 [2024-07-26 11:35:29.104097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.696 [2024-07-26 11:35:29.104533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.696 [2024-07-26 11:35:29.104551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.696 [2024-07-26 11:35:29.104561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.696 [2024-07-26 11:35:29.104735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.696 [2024-07-26 11:35:29.104907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.696 [2024-07-26 11:35:29.104917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.696 [2024-07-26 11:35:29.104923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.696 [2024-07-26 11:35:29.107441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.696 [2024-07-26 11:35:29.116963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.696 [2024-07-26 11:35:29.117386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.696 [2024-07-26 11:35:29.117403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.696 [2024-07-26 11:35:29.117410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.696 [2024-07-26 11:35:29.117568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.696 [2024-07-26 11:35:29.117733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.696 [2024-07-26 11:35:29.117742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.696 [2024-07-26 11:35:29.117748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.696 [2024-07-26 11:35:29.120270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.696 [2024-07-26 11:35:29.129726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.696 [2024-07-26 11:35:29.130081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.696 [2024-07-26 11:35:29.130098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.696 [2024-07-26 11:35:29.130106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.696 [2024-07-26 11:35:29.130264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.696 [2024-07-26 11:35:29.130422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.696 [2024-07-26 11:35:29.130431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.696 [2024-07-26 11:35:29.130438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.696 [2024-07-26 11:35:29.132962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.696 [2024-07-26 11:35:29.142491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.696 [2024-07-26 11:35:29.142920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.696 [2024-07-26 11:35:29.142937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.696 [2024-07-26 11:35:29.142945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.696 [2024-07-26 11:35:29.143112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.696 [2024-07-26 11:35:29.143279] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.696 [2024-07-26 11:35:29.143292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.696 [2024-07-26 11:35:29.143299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.696 [2024-07-26 11:35:29.146047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.696 [2024-07-26 11:35:29.155372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.696 [2024-07-26 11:35:29.155755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.696 [2024-07-26 11:35:29.155799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.696 [2024-07-26 11:35:29.155822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.696 [2024-07-26 11:35:29.156388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.696 [2024-07-26 11:35:29.156548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.696 [2024-07-26 11:35:29.156555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.696 [2024-07-26 11:35:29.156561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.696 [2024-07-26 11:35:29.159242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.696 [2024-07-26 11:35:29.168256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.696 [2024-07-26 11:35:29.168671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.696 [2024-07-26 11:35:29.168726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.696 [2024-07-26 11:35:29.168749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.696 [2024-07-26 11:35:29.169309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.696 [2024-07-26 11:35:29.169468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.696 [2024-07-26 11:35:29.169476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.169482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.172153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.180966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.181315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.181331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.181337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.181494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.181660] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.181670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.181676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.184308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.193695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.194114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.194129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.194137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.194295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.194454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.194464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.194469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.197057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.206488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.206919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.206937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.206944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.207111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.207278] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.207287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.207293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.209958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.219369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.219757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.219774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.219781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.219939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.220097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.220106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.220112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.222763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.232245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.232666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.232714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.232736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.233280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.233608] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.233636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.233652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.239873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.247078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.247600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.247655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.247678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.248243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.248497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.248510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.248520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.252569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.260096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.260543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.260587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.260609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.261203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.261693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.261703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.261710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.264320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.272888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.273289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.273305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.273312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.273471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.273636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.273645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.273656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.276174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.285775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.286130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.286146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.286153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.286311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.286469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.286478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.697 [2024-07-26 11:35:29.286484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.697 [2024-07-26 11:35:29.289011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.697 [2024-07-26 11:35:29.298587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.697 [2024-07-26 11:35:29.298978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.697 [2024-07-26 11:35:29.298994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.697 [2024-07-26 11:35:29.299001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.697 [2024-07-26 11:35:29.299159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.697 [2024-07-26 11:35:29.299317] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.697 [2024-07-26 11:35:29.299326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.698 [2024-07-26 11:35:29.299332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.698 [2024-07-26 11:35:29.301859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.698 [2024-07-26 11:35:29.311377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.698 [2024-07-26 11:35:29.311791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.698 [2024-07-26 11:35:29.311808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.698 [2024-07-26 11:35:29.311815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.698 [2024-07-26 11:35:29.311973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.698 [2024-07-26 11:35:29.312131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.698 [2024-07-26 11:35:29.312140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.698 [2024-07-26 11:35:29.312146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.698 [2024-07-26 11:35:29.314832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.698 [2024-07-26 11:35:29.324203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.698 [2024-07-26 11:35:29.324594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.698 [2024-07-26 11:35:29.324614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.698 [2024-07-26 11:35:29.324621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.698 [2024-07-26 11:35:29.324786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.698 [2024-07-26 11:35:29.324945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.698 [2024-07-26 11:35:29.324954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.698 [2024-07-26 11:35:29.324960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.698 [2024-07-26 11:35:29.327573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.698 [2024-07-26 11:35:29.337044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.698 [2024-07-26 11:35:29.337480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.698 [2024-07-26 11:35:29.337523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.698 [2024-07-26 11:35:29.337546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.698 [2024-07-26 11:35:29.338140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.698 [2024-07-26 11:35:29.338691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.698 [2024-07-26 11:35:29.338700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.698 [2024-07-26 11:35:29.338706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.698 [2024-07-26 11:35:29.341225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.698 [2024-07-26 11:35:29.349854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.698 [2024-07-26 11:35:29.350281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.698 [2024-07-26 11:35:29.350317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.698 [2024-07-26 11:35:29.350329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.698 [2024-07-26 11:35:29.350515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.698 [2024-07-26 11:35:29.350704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.698 [2024-07-26 11:35:29.350716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.698 [2024-07-26 11:35:29.350724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.698 [2024-07-26 11:35:29.353578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.960 [2024-07-26 11:35:29.362839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.960 [2024-07-26 11:35:29.363268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.960 [2024-07-26 11:35:29.363286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.960 [2024-07-26 11:35:29.363293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.960 [2024-07-26 11:35:29.363452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.960 [2024-07-26 11:35:29.363614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.960 [2024-07-26 11:35:29.363624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.960 [2024-07-26 11:35:29.363638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.960 [2024-07-26 11:35:29.366275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.960 [2024-07-26 11:35:29.375557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.960 [2024-07-26 11:35:29.375973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.960 [2024-07-26 11:35:29.376016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.960 [2024-07-26 11:35:29.376040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.960 [2024-07-26 11:35:29.376555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.960 [2024-07-26 11:35:29.376721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.960 [2024-07-26 11:35:29.376731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.960 [2024-07-26 11:35:29.376737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.960 [2024-07-26 11:35:29.379254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.960 [2024-07-26 11:35:29.388323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.960 [2024-07-26 11:35:29.388724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.960 [2024-07-26 11:35:29.388741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.960 [2024-07-26 11:35:29.388747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.960 [2024-07-26 11:35:29.388905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.960 [2024-07-26 11:35:29.389064] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.960 [2024-07-26 11:35:29.389073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.960 [2024-07-26 11:35:29.389079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.960 [2024-07-26 11:35:29.391600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.960 [2024-07-26 11:35:29.401231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.960 [2024-07-26 11:35:29.401677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.960 [2024-07-26 11:35:29.401724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:33.960 [2024-07-26 11:35:29.401746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:33.960 [2024-07-26 11:35:29.402324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:33.960 [2024-07-26 11:35:29.402918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.960 [2024-07-26 11:35:29.402956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.960 [2024-07-26 11:35:29.402963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.960 [2024-07-26 11:35:29.405713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.960 [2024-07-26 11:35:29.414226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.414672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.414714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.414737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.415315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.415758] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.415768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.415775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.421911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.429268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.429790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.429812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.429822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.430074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.430327] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.430339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.430349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.434402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.442328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.442772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.442815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.442837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.443415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.443936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.443946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.443953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.446686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.455145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.455547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.455591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.455621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.456136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.456526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.456543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.456558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.462793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.469924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.470443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.470497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.470518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.471086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.471342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.471355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.471364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.475408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.482844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.483273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.483315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.483337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.483798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.483968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.483977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.483983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.486628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.495679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.496140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.496183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.496204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.496797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.497293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.497308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.497315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.499847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.508479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.508828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.508844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.508852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.509010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.509169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.509178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.509184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.511714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.521291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.521717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.521762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.521785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.522213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.522373] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.960 [2024-07-26 11:35:29.522382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.960 [2024-07-26 11:35:29.522389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.960 [2024-07-26 11:35:29.524916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.960 [2024-07-26 11:35:29.534012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.960 [2024-07-26 11:35:29.534425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.960 [2024-07-26 11:35:29.534469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.960 [2024-07-26 11:35:29.534491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.960 [2024-07-26 11:35:29.534937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.960 [2024-07-26 11:35:29.535099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.961 [2024-07-26 11:35:29.535108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.961 [2024-07-26 11:35:29.535116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.961 [2024-07-26 11:35:29.537644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.961 [2024-07-26 11:35:29.546819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.961 [2024-07-26 11:35:29.547192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.961 [2024-07-26 11:35:29.547235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.961 [2024-07-26 11:35:29.547260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.961 [2024-07-26 11:35:29.547852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.961 [2024-07-26 11:35:29.548396] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.961 [2024-07-26 11:35:29.548406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.961 [2024-07-26 11:35:29.548413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.961 [2024-07-26 11:35:29.550966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.961 [2024-07-26 11:35:29.559600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.961 [2024-07-26 11:35:29.560042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.961 [2024-07-26 11:35:29.560058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.961 [2024-07-26 11:35:29.560065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.961 [2024-07-26 11:35:29.560223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.961 [2024-07-26 11:35:29.560381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.961 [2024-07-26 11:35:29.560390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.961 [2024-07-26 11:35:29.560396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.961 [2024-07-26 11:35:29.562918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.961 [2024-07-26 11:35:29.572397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.961 [2024-07-26 11:35:29.572837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.961 [2024-07-26 11:35:29.572881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.961 [2024-07-26 11:35:29.572903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.961 [2024-07-26 11:35:29.573404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.961 [2024-07-26 11:35:29.573564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.961 [2024-07-26 11:35:29.573572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.961 [2024-07-26 11:35:29.573578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.961 [2024-07-26 11:35:29.576144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.961 [2024-07-26 11:35:29.585271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.961 [2024-07-26 11:35:29.585669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.961 [2024-07-26 11:35:29.585686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.961 [2024-07-26 11:35:29.585693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.961 [2024-07-26 11:35:29.585855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.961 [2024-07-26 11:35:29.586013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.961 [2024-07-26 11:35:29.586022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.961 [2024-07-26 11:35:29.586028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.961 [2024-07-26 11:35:29.588550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.961 [2024-07-26 11:35:29.598100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.961 [2024-07-26 11:35:29.598536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.961 [2024-07-26 11:35:29.598552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.961 [2024-07-26 11:35:29.598559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.961 [2024-07-26 11:35:29.598731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.961 [2024-07-26 11:35:29.598899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.961 [2024-07-26 11:35:29.598909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.961 [2024-07-26 11:35:29.598916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.961 [2024-07-26 11:35:29.601514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.961 [2024-07-26 11:35:29.610891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.961 [2024-07-26 11:35:29.611307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.961 [2024-07-26 11:35:29.611323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:33.961 [2024-07-26 11:35:29.611330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:33.961 [2024-07-26 11:35:29.611508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:33.961 [2024-07-26 11:35:29.611702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.961 [2024-07-26 11:35:29.611713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.961 [2024-07-26 11:35:29.611719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.961 [2024-07-26 11:35:29.614478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.223 [2024-07-26 11:35:29.623933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.223 [2024-07-26 11:35:29.624373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.223 [2024-07-26 11:35:29.624421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.223 [2024-07-26 11:35:29.624446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.223 [2024-07-26 11:35:29.625043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.223 [2024-07-26 11:35:29.625255] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.223 [2024-07-26 11:35:29.625265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.223 [2024-07-26 11:35:29.625276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.223 [2024-07-26 11:35:29.627852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.223 [2024-07-26 11:35:29.636720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.223 [2024-07-26 11:35:29.637164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.223 [2024-07-26 11:35:29.637209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.223 [2024-07-26 11:35:29.637231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.223 [2024-07-26 11:35:29.637826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.223 [2024-07-26 11:35:29.638228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.223 [2024-07-26 11:35:29.638247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.223 [2024-07-26 11:35:29.638261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.223 [2024-07-26 11:35:29.644493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.223 [2024-07-26 11:35:29.651898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.223 [2024-07-26 11:35:29.652413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.223 [2024-07-26 11:35:29.652435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.223 [2024-07-26 11:35:29.652446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.223 [2024-07-26 11:35:29.652707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.223 [2024-07-26 11:35:29.652963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.223 [2024-07-26 11:35:29.652976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.223 [2024-07-26 11:35:29.652986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.223 [2024-07-26 11:35:29.657037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.223 [2024-07-26 11:35:29.664957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.223 [2024-07-26 11:35:29.665386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.223 [2024-07-26 11:35:29.665403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.223 [2024-07-26 11:35:29.665410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.223 [2024-07-26 11:35:29.665576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.223 [2024-07-26 11:35:29.665749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.223 [2024-07-26 11:35:29.665760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.223 [2024-07-26 11:35:29.665766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.223 [2024-07-26 11:35:29.668426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.223 [2024-07-26 11:35:29.677804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.223 [2024-07-26 11:35:29.678197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.223 [2024-07-26 11:35:29.678214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.223 [2024-07-26 11:35:29.678221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.223 [2024-07-26 11:35:29.678379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.223 [2024-07-26 11:35:29.678537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.223 [2024-07-26 11:35:29.678546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.223 [2024-07-26 11:35:29.678552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.223 [2024-07-26 11:35:29.681077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.223 [2024-07-26 11:35:29.690594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.223 [2024-07-26 11:35:29.690991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.223 [2024-07-26 11:35:29.691008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.223 [2024-07-26 11:35:29.691015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.223 [2024-07-26 11:35:29.691173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.223 [2024-07-26 11:35:29.691331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.223 [2024-07-26 11:35:29.691341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.223 [2024-07-26 11:35:29.691347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.223 [2024-07-26 11:35:29.693904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.703369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.224 [2024-07-26 11:35:29.703743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.224 [2024-07-26 11:35:29.703786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.224 [2024-07-26 11:35:29.703808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.224 [2024-07-26 11:35:29.704334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.224 [2024-07-26 11:35:29.704493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.224 [2024-07-26 11:35:29.704501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.224 [2024-07-26 11:35:29.704507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.224 [2024-07-26 11:35:29.707032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.716110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.224 [2024-07-26 11:35:29.716502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.224 [2024-07-26 11:35:29.716519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.224 [2024-07-26 11:35:29.716525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.224 [2024-07-26 11:35:29.716690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.224 [2024-07-26 11:35:29.716852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.224 [2024-07-26 11:35:29.716861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.224 [2024-07-26 11:35:29.716868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.224 [2024-07-26 11:35:29.719389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.728852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.224 [2024-07-26 11:35:29.729246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.224 [2024-07-26 11:35:29.729289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.224 [2024-07-26 11:35:29.729312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.224 [2024-07-26 11:35:29.729785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.224 [2024-07-26 11:35:29.729945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.224 [2024-07-26 11:35:29.729953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.224 [2024-07-26 11:35:29.729959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.224 [2024-07-26 11:35:29.732477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.741701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.224 [2024-07-26 11:35:29.742112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.224 [2024-07-26 11:35:29.742128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.224 [2024-07-26 11:35:29.742135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.224 [2024-07-26 11:35:29.742292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.224 [2024-07-26 11:35:29.742451] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.224 [2024-07-26 11:35:29.742460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.224 [2024-07-26 11:35:29.742466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.224 [2024-07-26 11:35:29.744998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.754457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.224 [2024-07-26 11:35:29.754901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.224 [2024-07-26 11:35:29.754944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.224 [2024-07-26 11:35:29.754967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.224 [2024-07-26 11:35:29.755478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.224 [2024-07-26 11:35:29.755643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.224 [2024-07-26 11:35:29.755651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.224 [2024-07-26 11:35:29.755658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.224 [2024-07-26 11:35:29.758180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.767308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.224 [2024-07-26 11:35:29.767597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.224 [2024-07-26 11:35:29.767614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.224 [2024-07-26 11:35:29.767621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.224 [2024-07-26 11:35:29.767794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.224 [2024-07-26 11:35:29.767965] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.224 [2024-07-26 11:35:29.767974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.224 [2024-07-26 11:35:29.767980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.224 [2024-07-26 11:35:29.770498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.780016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.224 [2024-07-26 11:35:29.780337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.224 [2024-07-26 11:35:29.780353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:34.224 [2024-07-26 11:35:29.780360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:34.224 [2024-07-26 11:35:29.780518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:34.224 [2024-07-26 11:35:29.780683] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.224 [2024-07-26 11:35:29.780693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.224 [2024-07-26 11:35:29.780699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.224 [2024-07-26 11:35:29.783217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.224 [2024-07-26 11:35:29.792869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.224 [2024-07-26 11:35:29.793277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.224 [2024-07-26 11:35:29.793293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.224 [2024-07-26 11:35:29.793300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.224 [2024-07-26 11:35:29.793457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.224 [2024-07-26 11:35:29.793616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.224 [2024-07-26 11:35:29.793631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.224 [2024-07-26 11:35:29.793639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.224 [2024-07-26 11:35:29.796158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.224 [2024-07-26 11:35:29.805761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.224 [2024-07-26 11:35:29.806149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.224 [2024-07-26 11:35:29.806165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.224 [2024-07-26 11:35:29.806174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.224 [2024-07-26 11:35:29.806332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.224 [2024-07-26 11:35:29.806490] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.224 [2024-07-26 11:35:29.806499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.224 [2024-07-26 11:35:29.806506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.224 [2024-07-26 11:35:29.809031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.224 [2024-07-26 11:35:29.818553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.224 [2024-07-26 11:35:29.818980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.224 [2024-07-26 11:35:29.819022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.224 [2024-07-26 11:35:29.819044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.224 [2024-07-26 11:35:29.819595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.224 [2024-07-26 11:35:29.819993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.224 [2024-07-26 11:35:29.820012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.224 [2024-07-26 11:35:29.820026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.224 [2024-07-26 11:35:29.826244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.224 [2024-07-26 11:35:29.833474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.225 [2024-07-26 11:35:29.833999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.225 [2024-07-26 11:35:29.834021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.225 [2024-07-26 11:35:29.834031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.225 [2024-07-26 11:35:29.834284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.225 [2024-07-26 11:35:29.834539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.225 [2024-07-26 11:35:29.834551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.225 [2024-07-26 11:35:29.834560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.225 [2024-07-26 11:35:29.838613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.225 [2024-07-26 11:35:29.846472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.225 [2024-07-26 11:35:29.846829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.225 [2024-07-26 11:35:29.846846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.225 [2024-07-26 11:35:29.846853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.225 [2024-07-26 11:35:29.847025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.225 [2024-07-26 11:35:29.847197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.225 [2024-07-26 11:35:29.847209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.225 [2024-07-26 11:35:29.847215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.225 [2024-07-26 11:35:29.849940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.225 [2024-07-26 11:35:29.859248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.225 [2024-07-26 11:35:29.859666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.225 [2024-07-26 11:35:29.859683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.225 [2024-07-26 11:35:29.859690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.225 [2024-07-26 11:35:29.859849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.225 [2024-07-26 11:35:29.860008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.225 [2024-07-26 11:35:29.860017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.225 [2024-07-26 11:35:29.860023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.225 [2024-07-26 11:35:29.862551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.225 [2024-07-26 11:35:29.872016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.225 [2024-07-26 11:35:29.872428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.225 [2024-07-26 11:35:29.872445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.225 [2024-07-26 11:35:29.872452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.225 [2024-07-26 11:35:29.872610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.225 [2024-07-26 11:35:29.872775] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.225 [2024-07-26 11:35:29.872785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.225 [2024-07-26 11:35:29.872792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.225 [2024-07-26 11:35:29.875309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.484 [2024-07-26 11:35:29.884953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.484 [2024-07-26 11:35:29.885380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.484 [2024-07-26 11:35:29.885399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.484 [2024-07-26 11:35:29.885407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.484 [2024-07-26 11:35:29.885580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.484 [2024-07-26 11:35:29.885760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.484 [2024-07-26 11:35:29.885771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.484 [2024-07-26 11:35:29.885778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.484 [2024-07-26 11:35:29.888542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.484 [2024-07-26 11:35:29.897953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.484 [2024-07-26 11:35:29.898314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.484 [2024-07-26 11:35:29.898332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.484 [2024-07-26 11:35:29.898340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.484 [2024-07-26 11:35:29.898516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.484 [2024-07-26 11:35:29.898684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.484 [2024-07-26 11:35:29.898694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.484 [2024-07-26 11:35:29.898702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.484 [2024-07-26 11:35:29.901222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.484 [2024-07-26 11:35:29.910749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.484 [2024-07-26 11:35:29.911184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.484 [2024-07-26 11:35:29.911201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.484 [2024-07-26 11:35:29.911209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.484 [2024-07-26 11:35:29.911375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.484 [2024-07-26 11:35:29.911543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.484 [2024-07-26 11:35:29.911553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.484 [2024-07-26 11:35:29.911559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.484 [2024-07-26 11:35:29.914303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.484 [2024-07-26 11:35:29.923640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:29.924009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:29.924026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:29.924033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:29.924201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:29.924369] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:29.924379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:29.924385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:29.927055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:29.936503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:29.936899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:29.936918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:29.936926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:29.937098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:29.937268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:29.937278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:29.937285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:29.939954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:29.949457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:29.949858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:29.949915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:29.949937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:29.950515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:29.951115] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:29.951135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:29.951149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:29.957384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:29.964388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:29.964847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:29.964870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:29.964880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:29.965133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:29.965387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:29.965399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:29.965409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:29.969463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:29.977375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:29.977783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:29.977801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:29.977808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:29.977980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:29.978153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:29.978163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:29.978173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:29.980918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:29.990329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:29.990621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:29.990644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:29.990653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:29.990824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:29.990996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:29.991005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:29.991012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:29.993756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:30.003442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:30.003867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:30.003885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:30.003893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:30.004074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:30.004258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:30.004268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:30.004275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:30.007064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:30.016690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:30.017066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:30.017083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:30.017091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:30.017263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:30.017437] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:30.017446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:30.017452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:30.020206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:30.029711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:30.030125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:30.030142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:30.030149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:30.030316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:30.030482] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:30.030492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:30.030499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:30.033171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:30.042615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.485 [2024-07-26 11:35:30.042981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.485 [2024-07-26 11:35:30.042999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.485 [2024-07-26 11:35:30.043006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.485 [2024-07-26 11:35:30.043173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.485 [2024-07-26 11:35:30.043341] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.485 [2024-07-26 11:35:30.043350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.485 [2024-07-26 11:35:30.043357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.485 [2024-07-26 11:35:30.046035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.485 [2024-07-26 11:35:30.055535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.486 [2024-07-26 11:35:30.055965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.486 [2024-07-26 11:35:30.055982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.486 [2024-07-26 11:35:30.055989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.486 [2024-07-26 11:35:30.056157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.486 [2024-07-26 11:35:30.056324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.486 [2024-07-26 11:35:30.056333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.486 [2024-07-26 11:35:30.056340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.486 [2024-07-26 11:35:30.059029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.486 [2024-07-26 11:35:30.068544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.486 [2024-07-26 11:35:30.068976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.486 [2024-07-26 11:35:30.068993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.486 [2024-07-26 11:35:30.069001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.486 [2024-07-26 11:35:30.069159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.486 [2024-07-26 11:35:30.069321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.486 [2024-07-26 11:35:30.069330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.486 [2024-07-26 11:35:30.069336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.486 [2024-07-26 11:35:30.071865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.486 [2024-07-26 11:35:30.081507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.486 [2024-07-26 11:35:30.081947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.486 [2024-07-26 11:35:30.081992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.486 [2024-07-26 11:35:30.082015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.486 [2024-07-26 11:35:30.082522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.486 [2024-07-26 11:35:30.082698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.486 [2024-07-26 11:35:30.082708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.486 [2024-07-26 11:35:30.082716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.486 [2024-07-26 11:35:30.085412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.486 [2024-07-26 11:35:30.094530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.486 [2024-07-26 11:35:30.094895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.486 [2024-07-26 11:35:30.094912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.486 [2024-07-26 11:35:30.094920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.486 [2024-07-26 11:35:30.095092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.486 [2024-07-26 11:35:30.095264] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.486 [2024-07-26 11:35:30.095274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.486 [2024-07-26 11:35:30.095280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.486 [2024-07-26 11:35:30.098047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.486 [2024-07-26 11:35:30.107607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.486 [2024-07-26 11:35:30.108049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.486 [2024-07-26 11:35:30.108066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.486 [2024-07-26 11:35:30.108073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.486 [2024-07-26 11:35:30.108245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.486 [2024-07-26 11:35:30.108419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.486 [2024-07-26 11:35:30.108428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.486 [2024-07-26 11:35:30.108435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.486 [2024-07-26 11:35:30.111189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.486 [2024-07-26 11:35:30.120634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.486 [2024-07-26 11:35:30.120991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.486 [2024-07-26 11:35:30.121008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.486 [2024-07-26 11:35:30.121015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.486 [2024-07-26 11:35:30.121199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.486 [2024-07-26 11:35:30.121373] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.486 [2024-07-26 11:35:30.121383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.486 [2024-07-26 11:35:30.121389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.486 [2024-07-26 11:35:30.124142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.486 [2024-07-26 11:35:30.133711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.486 [2024-07-26 11:35:30.134050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.486 [2024-07-26 11:35:30.134068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.486 [2024-07-26 11:35:30.134076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.486 [2024-07-26 11:35:30.134248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.486 [2024-07-26 11:35:30.134420] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.486 [2024-07-26 11:35:30.134430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.486 [2024-07-26 11:35:30.134436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.486 [2024-07-26 11:35:30.137184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.745 [2024-07-26 11:35:30.146775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.745 [2024-07-26 11:35:30.147158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.745 [2024-07-26 11:35:30.147177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.745 [2024-07-26 11:35:30.147185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.147367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.147553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.147563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.147570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.150611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.159744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.160150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.160168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.160182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.160354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.160528] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.160538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.160545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.163303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.173125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.173589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.173606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.173614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.173805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.173990] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.174001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.174009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.176932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.186084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.186438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.186455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.186462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.186644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.186819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.186829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.186836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.189579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.199112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.199487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.199531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.199553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.200146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.200742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.200775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.200781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.203445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.212110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.212492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.212509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.212516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.212691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.212859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.212868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.212875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.215562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.225253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.225610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.225636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.225644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.225816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.225989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.225999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.226005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.228752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.238322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.238714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.238732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.238740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.238919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.239087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.239097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.239103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.241771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.251297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.251645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.746 [2024-07-26 11:35:30.251663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.746 [2024-07-26 11:35:30.251670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.746 [2024-07-26 11:35:30.251837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.746 [2024-07-26 11:35:30.252006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.746 [2024-07-26 11:35:30.252015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.746 [2024-07-26 11:35:30.252022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.746 [2024-07-26 11:35:30.254695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.746 [2024-07-26 11:35:30.264312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.746 [2024-07-26 11:35:30.264665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.264710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.264732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.265310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.265905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.265932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.265952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.268743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.277230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.277591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.277665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.277689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.278191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.278360] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.278370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.278377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.281040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.290195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.291012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.291035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.291043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.291221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.291390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.291400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.291406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.294076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.303205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.303587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.303606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.303613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.303785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.303954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.303964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.303970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.306636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.316113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.316405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.316423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.316430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.316598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.316770] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.316780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.316786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.319449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.329043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.329329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.329347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.329354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.329521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.329694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.329704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.329714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.332374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.341965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.342251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.342268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.342276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.342442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.342610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.342619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.342632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.345296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.354949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.355765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.355787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.355795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.355970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.356138] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.356148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.356155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.358828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.367956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.368235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.368252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.368259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.368427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.368594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.747 [2024-07-26 11:35:30.368604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.747 [2024-07-26 11:35:30.368610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.747 [2024-07-26 11:35:30.371279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.747 [2024-07-26 11:35:30.380883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.747 [2024-07-26 11:35:30.381262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.747 [2024-07-26 11:35:30.381305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.747 [2024-07-26 11:35:30.381327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.747 [2024-07-26 11:35:30.381922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.747 [2024-07-26 11:35:30.382325] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.748 [2024-07-26 11:35:30.382335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.748 [2024-07-26 11:35:30.382341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.748 [2024-07-26 11:35:30.385023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.748 [2024-07-26 11:35:30.393819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.748 [2024-07-26 11:35:30.394151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.748 [2024-07-26 11:35:30.394168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:34.748 [2024-07-26 11:35:30.394174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:34.748 [2024-07-26 11:35:30.394341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:34.748 [2024-07-26 11:35:30.394508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.748 [2024-07-26 11:35:30.394517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.748 [2024-07-26 11:35:30.394523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.748 [2024-07-26 11:35:30.397192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.007 [2024-07-26 11:35:30.406888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.007 [2024-07-26 11:35:30.407192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-07-26 11:35:30.407210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.007 [2024-07-26 11:35:30.407217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.007 [2024-07-26 11:35:30.407376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.007 [2024-07-26 11:35:30.407536] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.007 [2024-07-26 11:35:30.407545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.007 [2024-07-26 11:35:30.407551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.007 [2024-07-26 11:35:30.410326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.007 [2024-07-26 11:35:30.419790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.007 [2024-07-26 11:35:30.420094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-07-26 11:35:30.420113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.007 [2024-07-26 11:35:30.420121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.007 [2024-07-26 11:35:30.420293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.007 [2024-07-26 11:35:30.420469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.007 [2024-07-26 11:35:30.420479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.007 [2024-07-26 11:35:30.420485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.007 [2024-07-26 11:35:30.423232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.007 [2024-07-26 11:35:30.432764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.007 [2024-07-26 11:35:30.433058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.433076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.433083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.433255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.433427] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.433437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.433444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.436132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.445762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.446059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.446077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.446084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.446255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.446429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.446439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.446446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.449166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.458596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.458948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.458964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.458971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.459130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.459288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.459297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.459303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.461832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.471462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.471842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.471886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.471908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.472464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.472624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.472639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.472645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.475165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.484208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.484495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.484512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.484519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.484691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.484859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.484869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.484875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.487485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.497163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.497448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.497465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.497472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.497634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.497793] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.497802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.497808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.500421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.509963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.510356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.510372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.510382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.510540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.510702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.510712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.510718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.513237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.522769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.523126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.523142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.523149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.523307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.523466] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.523475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.523481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.526008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.535541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.535884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.535901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.535908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.536066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.536224] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.536233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.536239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.538765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.548391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.548827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.548872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.008 [2024-07-26 11:35:30.548894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.008 [2024-07-26 11:35:30.549413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.008 [2024-07-26 11:35:30.549582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.008 [2024-07-26 11:35:30.549594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.008 [2024-07-26 11:35:30.549600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.008 [2024-07-26 11:35:30.552169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.008 [2024-07-26 11:35:30.561101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.008 [2024-07-26 11:35:30.561449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-07-26 11:35:30.561465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.561472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.561635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.561795] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.561804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.561810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 [2024-07-26 11:35:30.564336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.009 [2024-07-26 11:35:30.573859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.009 [2024-07-26 11:35:30.574281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.009 [2024-07-26 11:35:30.574324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.574347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.574721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.574881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.574890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.574896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 [2024-07-26 11:35:30.577416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.009 [2024-07-26 11:35:30.586771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.009 [2024-07-26 11:35:30.587118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.009 [2024-07-26 11:35:30.587136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.587143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.587301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.587459] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.587468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.587474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 [2024-07-26 11:35:30.589997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.009 [2024-07-26 11:35:30.599713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.009 [2024-07-26 11:35:30.600138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.009 [2024-07-26 11:35:30.600155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.600163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.600340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.600499] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.600508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.600514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1665008 Killed "${NVMF_APP[@]}" "$@" 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.009 [2024-07-26 11:35:30.603170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1666413 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1666413 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1666413 ']' 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.009 11:35:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:35.009 [2024-07-26 11:35:30.612712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.009 [2024-07-26 11:35:30.613055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.009 [2024-07-26 11:35:30.613073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.613081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.613253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.613427] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.613437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.613444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 [2024-07-26 11:35:30.616191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.009 [2024-07-26 11:35:30.625751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.009 [2024-07-26 11:35:30.626179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.009 [2024-07-26 11:35:30.626197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.626205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.626377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.626550] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.626559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.626567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 [2024-07-26 11:35:30.629313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.009 [2024-07-26 11:35:30.638875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.009 [2024-07-26 11:35:30.639209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.009 [2024-07-26 11:35:30.639225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.639232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.639399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.639566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.639575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.639581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 [2024-07-26 11:35:30.642301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.009 [2024-07-26 11:35:30.651892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.009 [2024-07-26 11:35:30.652313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.009 [2024-07-26 11:35:30.652330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.009 [2024-07-26 11:35:30.652338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.009 [2024-07-26 11:35:30.652505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.009 [2024-07-26 11:35:30.652678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.009 [2024-07-26 11:35:30.652688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.009 [2024-07-26 11:35:30.652695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.009 [2024-07-26 11:35:30.655418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.009 [2024-07-26 11:35:30.657069] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:27:35.009 [2024-07-26 11:35:30.657109] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.009 [2024-07-26 11:35:30.665034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.268 [2024-07-26 11:35:30.665383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.268 [2024-07-26 11:35:30.665408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.268 [2024-07-26 11:35:30.665422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.268 [2024-07-26 11:35:30.665613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.268 [2024-07-26 11:35:30.665819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.268 [2024-07-26 11:35:30.665830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.268 [2024-07-26 11:35:30.665838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.268 [2024-07-26 11:35:30.668612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.268 [2024-07-26 11:35:30.678131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.268 [2024-07-26 11:35:30.678547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.268 [2024-07-26 11:35:30.678565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.268 [2024-07-26 11:35:30.678573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.678753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.678928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.678938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.678945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.681692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.269 [2024-07-26 11:35:30.691095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.691484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.691502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.691509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.691686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.691859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.691868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.691875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.694616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 [2024-07-26 11:35:30.704168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.704518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.704536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.704547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.704726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.704901] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.704911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.704917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.707659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 [2024-07-26 11:35:30.717218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.717645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.717663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.717671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.717838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.718007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.718016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.718023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.720687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 [2024-07-26 11:35:30.726395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:35.269 [2024-07-26 11:35:30.730125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.730557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.730574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.730581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.730754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.730922] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.730932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.730938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.733602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 [2024-07-26 11:35:30.743042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.743475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.743491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.743498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.743682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.743850] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.743863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.743870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.746531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 [2024-07-26 11:35:30.756028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.756396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.756413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.756420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.756586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.756761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.756771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.756777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.759436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 [2024-07-26 11:35:30.769093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.769534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.769553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.769561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.769733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.769902] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.769912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.769919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.269 [2024-07-26 11:35:30.772579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.269 [2024-07-26 11:35:30.782140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.269 [2024-07-26 11:35:30.782580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.269 [2024-07-26 11:35:30.782598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.269 [2024-07-26 11:35:30.782605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.269 [2024-07-26 11:35:30.782784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.269 [2024-07-26 11:35:30.782958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.269 [2024-07-26 11:35:30.782967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.269 [2024-07-26 11:35:30.782974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.785718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.795111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.795556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.795573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.795580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.795758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.795931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.795940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.795947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.798661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.270 [2024-07-26 11:35:30.804989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.270 [2024-07-26 11:35:30.805014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.270 [2024-07-26 11:35:30.805022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.270 [2024-07-26 11:35:30.805028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:35.270 [2024-07-26 11:35:30.805033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.270 [2024-07-26 11:35:30.805074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.270 [2024-07-26 11:35:30.805184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.270 [2024-07-26 11:35:30.805185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.270 [2024-07-26 11:35:30.808100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.808540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.808558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.808566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.808744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.808918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.808928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.808935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.811680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.821081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.821535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.821554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.821562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.821740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.821914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.821929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.821936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.824682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.834081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.834527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.834546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.834554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.834744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.834918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.834928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.834934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.837686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.847095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.847476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.847495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.847502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.847680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.847854] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.847864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.847871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.850612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.860173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.860620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.860644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.860652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.860824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.860997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.861007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.861014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.863768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.873188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.873623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.873644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.873668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.873841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.874015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.874024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.874031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.876773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.886166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.886580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.886596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.886604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.886781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.886954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.886964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.886970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.889711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.899261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.270 [2024-07-26 11:35:30.899622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.270 [2024-07-26 11:35:30.899644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.270 [2024-07-26 11:35:30.899652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.270 [2024-07-26 11:35:30.899824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.270 [2024-07-26 11:35:30.899996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.270 [2024-07-26 11:35:30.900006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.270 [2024-07-26 11:35:30.900012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.270 [2024-07-26 11:35:30.902731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.270 [2024-07-26 11:35:30.912280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.271 [2024-07-26 11:35:30.912697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.271 [2024-07-26 11:35:30.912714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.271 [2024-07-26 11:35:30.912722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.271 [2024-07-26 11:35:30.912898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.271 [2024-07-26 11:35:30.913072] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.271 [2024-07-26 11:35:30.913082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.271 [2024-07-26 11:35:30.913088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.271 [2024-07-26 11:35:30.915830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.271 [2024-07-26 11:35:30.925505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.271 [2024-07-26 11:35:30.925887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.271 [2024-07-26 11:35:30.925906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.271 [2024-07-26 11:35:30.925915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.271 [2024-07-26 11:35:30.926088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.271 [2024-07-26 11:35:30.926262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.271 [2024-07-26 11:35:30.926273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.271 [2024-07-26 11:35:30.926279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.529 [2024-07-26 11:35:30.929104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.529 [2024-07-26 11:35:30.938607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.529 [2024-07-26 11:35:30.939082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.529 [2024-07-26 11:35:30.939101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.529 [2024-07-26 11:35:30.939109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.529 [2024-07-26 11:35:30.939282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.529 [2024-07-26 11:35:30.939455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.529 [2024-07-26 11:35:30.939464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.529 [2024-07-26 11:35:30.939471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.529 [2024-07-26 11:35:30.942218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.529 [2024-07-26 11:35:30.951614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.529 [2024-07-26 11:35:30.952049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.529 [2024-07-26 11:35:30.952067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.529 [2024-07-26 11:35:30.952075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.529 [2024-07-26 11:35:30.952246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.529 [2024-07-26 11:35:30.952419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.529 [2024-07-26 11:35:30.952429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.529 [2024-07-26 11:35:30.952439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.529 [2024-07-26 11:35:30.955183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.529 [2024-07-26 11:35:30.964575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.529 [2024-07-26 11:35:30.965011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.529 [2024-07-26 11:35:30.965028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.529 [2024-07-26 11:35:30.965035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.529 [2024-07-26 11:35:30.965208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.529 [2024-07-26 11:35:30.965382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.529 [2024-07-26 11:35:30.965391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.529 [2024-07-26 11:35:30.965397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.529 [2024-07-26 11:35:30.968140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.529 [2024-07-26 11:35:30.977528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.529 [2024-07-26 11:35:30.977942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.529 [2024-07-26 11:35:30.977959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.529 [2024-07-26 11:35:30.977967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.529 [2024-07-26 11:35:30.978137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.529 [2024-07-26 11:35:30.978310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.529 [2024-07-26 11:35:30.978320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.529 [2024-07-26 11:35:30.978326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.529 [2024-07-26 11:35:30.981072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.529 [2024-07-26 11:35:30.990633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.529 [2024-07-26 11:35:30.990991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.529 [2024-07-26 11:35:30.991009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.529 [2024-07-26 11:35:30.991017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.529 [2024-07-26 11:35:30.991193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.529 [2024-07-26 11:35:30.991366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.529 [2024-07-26 11:35:30.991376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.529 [2024-07-26 11:35:30.991384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.529 [2024-07-26 11:35:30.994131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.529 [2024-07-26 11:35:31.003692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.529 [2024-07-26 11:35:31.004049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.004073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.004080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.004252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.004426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.004435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.004442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.007188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.016747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.017100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.017118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.017125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.017297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.017471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.017481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.017487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.020232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.029784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.030164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.030181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.030188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.030360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.030534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.030543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.030549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.033291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.042816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.043179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.043196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.043204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.043376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.043551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.043561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.043567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.046316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.055902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.056312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.056329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.056337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.056510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.056687] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.056698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.056705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.059443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.068994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.069361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.069379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.069387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.069557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.069734] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.069745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.069751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.072489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.082038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.082465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.082482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.082490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.082667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.082840] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.082849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.082857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.085598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.094984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.095392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.095409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.095417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.095589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.095765] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.095776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.095782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.098519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.108064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.108471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.530 [2024-07-26 11:35:31.108488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.530 [2024-07-26 11:35:31.108495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.530 [2024-07-26 11:35:31.108671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.530 [2024-07-26 11:35:31.108843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.530 [2024-07-26 11:35:31.108853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.530 [2024-07-26 11:35:31.108860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.530 [2024-07-26 11:35:31.111597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.530 [2024-07-26 11:35:31.121143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.530 [2024-07-26 11:35:31.121575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.531 [2024-07-26 11:35:31.121591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.531 [2024-07-26 11:35:31.121599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.531 [2024-07-26 11:35:31.121774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.531 [2024-07-26 11:35:31.121947] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.531 [2024-07-26 11:35:31.121956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.531 [2024-07-26 11:35:31.121962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.531 [2024-07-26 11:35:31.124706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.531 [2024-07-26 11:35:31.134086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.531 [2024-07-26 11:35:31.134458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.531 [2024-07-26 11:35:31.134475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.531 [2024-07-26 11:35:31.134485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.531 [2024-07-26 11:35:31.134663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.531 [2024-07-26 11:35:31.134837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.531 [2024-07-26 11:35:31.134847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.531 [2024-07-26 11:35:31.134854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.531 [2024-07-26 11:35:31.137590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.531 [2024-07-26 11:35:31.147143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.531 [2024-07-26 11:35:31.147503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.531 [2024-07-26 11:35:31.147520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.531 [2024-07-26 11:35:31.147527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.531 [2024-07-26 11:35:31.147703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.531 [2024-07-26 11:35:31.147876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.531 [2024-07-26 11:35:31.147885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.531 [2024-07-26 11:35:31.147892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.531 [2024-07-26 11:35:31.150631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.531 [2024-07-26 11:35:31.160191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.531 [2024-07-26 11:35:31.160541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.531 [2024-07-26 11:35:31.160557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.531 [2024-07-26 11:35:31.160564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.531 [2024-07-26 11:35:31.160734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.531 [2024-07-26 11:35:31.160902] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.531 [2024-07-26 11:35:31.160911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.531 [2024-07-26 11:35:31.160918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.531 [2024-07-26 11:35:31.163652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.531 [2024-07-26 11:35:31.173207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.531 [2024-07-26 11:35:31.173619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.531 [2024-07-26 11:35:31.173640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.531 [2024-07-26 11:35:31.173647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.531 [2024-07-26 11:35:31.173819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.531 [2024-07-26 11:35:31.173993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.531 [2024-07-26 11:35:31.174006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.531 [2024-07-26 11:35:31.174014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.531 [2024-07-26 11:35:31.176756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.531 [2024-07-26 11:35:31.186250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.531 [2024-07-26 11:35:31.186692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.531 [2024-07-26 11:35:31.186711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.531 [2024-07-26 11:35:31.186719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.531 [2024-07-26 11:35:31.186893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.531 [2024-07-26 11:35:31.187065] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.531 [2024-07-26 11:35:31.187075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.531 [2024-07-26 11:35:31.187081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.789 [2024-07-26 11:35:31.189945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.199285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.199722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.199742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.199750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.199923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.200096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.200106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.200113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.202853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.212241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.212672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.212691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.212698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.212870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.213045] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.213054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.213061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.215804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.225192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.225621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.225642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.225650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.225823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.225995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.226005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.226012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.228751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.238138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.238538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.238555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.238563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.238740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.238912] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.238921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.238928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.241666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.251218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.251652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.251670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.251678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.251851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.252024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.252034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.252041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.254780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.264165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.264589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.264606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.264613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.264793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.264966] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.264975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.264982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.267731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.277125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.277532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.277550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.277557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.277747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.277922] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.277931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.277938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.280680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.290073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.290450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.290466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.290473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.290649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.290820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.290828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.290835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.293572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.303126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.303526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.303542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.303549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.303726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.303898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.303906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.303917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.306660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.316416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.790 [2024-07-26 11:35:31.316783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.790 [2024-07-26 11:35:31.316800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.790 [2024-07-26 11:35:31.316808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.790 [2024-07-26 11:35:31.316979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.790 [2024-07-26 11:35:31.317152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.790 [2024-07-26 11:35:31.317160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.790 [2024-07-26 11:35:31.317166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.790 [2024-07-26 11:35:31.319911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.790 [2024-07-26 11:35:31.329455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.791 [2024-07-26 11:35:31.329894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.791 [2024-07-26 11:35:31.329910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.791 [2024-07-26 11:35:31.329917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.791 [2024-07-26 11:35:31.330089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.791 [2024-07-26 11:35:31.330261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.791 [2024-07-26 11:35:31.330269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.791 [2024-07-26 11:35:31.330276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.791 [2024-07-26 11:35:31.333018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.791 [2024-07-26 11:35:31.342370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.791 [2024-07-26 11:35:31.342714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.791 [2024-07-26 11:35:31.342730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.791 [2024-07-26 11:35:31.342737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.791 [2024-07-26 11:35:31.342909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.791 [2024-07-26 11:35:31.343081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.791 [2024-07-26 11:35:31.343089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.791 [2024-07-26 11:35:31.343096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.791 [2024-07-26 11:35:31.345850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.791 [2024-07-26 11:35:31.355396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.791 [2024-07-26 11:35:31.355830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.791 [2024-07-26 11:35:31.355849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.791 [2024-07-26 11:35:31.355856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.791 [2024-07-26 11:35:31.356027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.791 [2024-07-26 11:35:31.356200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.791 [2024-07-26 11:35:31.356208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.791 [2024-07-26 11:35:31.356214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.791 [2024-07-26 11:35:31.358958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.791 [2024-07-26 11:35:31.368355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.791 [2024-07-26 11:35:31.368780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.791 [2024-07-26 11:35:31.368796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420
00:27:35.791 [2024-07-26 11:35:31.368803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set
00:27:35.791 [2024-07-26 11:35:31.368974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor
00:27:35.791 [2024-07-26 11:35:31.369146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.791 [2024-07-26 11:35:31.369154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.791 [2024-07-26 11:35:31.369160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.791 [2024-07-26 11:35:31.371906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.791 [2024-07-26 11:35:31.381468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.791 [2024-07-26 11:35:31.381896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.791 [2024-07-26 11:35:31.381912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.791 [2024-07-26 11:35:31.381920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.791 [2024-07-26 11:35:31.382092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.791 [2024-07-26 11:35:31.382264] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.791 [2024-07-26 11:35:31.382272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.791 [2024-07-26 11:35:31.382278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.791 [2024-07-26 11:35:31.385024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.791 [2024-07-26 11:35:31.394411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.791 [2024-07-26 11:35:31.394842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.791 [2024-07-26 11:35:31.394859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.791 [2024-07-26 11:35:31.394866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.791 [2024-07-26 11:35:31.395037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.791 [2024-07-26 11:35:31.395213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.791 [2024-07-26 11:35:31.395222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.791 [2024-07-26 11:35:31.395229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.791 [2024-07-26 11:35:31.397984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.791 [2024-07-26 11:35:31.407372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.791 [2024-07-26 11:35:31.407809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.791 [2024-07-26 11:35:31.407827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.791 [2024-07-26 11:35:31.407834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.791 [2024-07-26 11:35:31.408006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.791 [2024-07-26 11:35:31.408180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.791 [2024-07-26 11:35:31.408188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.791 [2024-07-26 11:35:31.408195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.791 [2024-07-26 11:35:31.410940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.791 [2024-07-26 11:35:31.420329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.791 [2024-07-26 11:35:31.420700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.791 [2024-07-26 11:35:31.420717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.791 [2024-07-26 11:35:31.420724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.791 [2024-07-26 11:35:31.420895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.791 [2024-07-26 11:35:31.421068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.791 [2024-07-26 11:35:31.421076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.791 [2024-07-26 11:35:31.421082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.791 [2024-07-26 11:35:31.423828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.791 [2024-07-26 11:35:31.433380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.791 [2024-07-26 11:35:31.433775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.791 [2024-07-26 11:35:31.433791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.791 [2024-07-26 11:35:31.433798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.791 [2024-07-26 11:35:31.433970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.791 [2024-07-26 11:35:31.434142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.791 [2024-07-26 11:35:31.434150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.791 [2024-07-26 11:35:31.434157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.791 [2024-07-26 11:35:31.436909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.791 [2024-07-26 11:35:31.446496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.791 [2024-07-26 11:35:31.446967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.791 [2024-07-26 11:35:31.446985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:35.791 [2024-07-26 11:35:31.446993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:35.791 [2024-07-26 11:35:31.447165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:35.791 [2024-07-26 11:35:31.447337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.791 [2024-07-26 11:35:31.447345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.791 [2024-07-26 11:35:31.447351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.049 [2024-07-26 11:35:31.450176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.049 [2024-07-26 11:35:31.459489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.049 [2024-07-26 11:35:31.459912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.049 [2024-07-26 11:35:31.459930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.049 [2024-07-26 11:35:31.459937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.049 [2024-07-26 11:35:31.460109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.049 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:36.050 [2024-07-26 11:35:31.460281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.460289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.460295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.050 [2024-07-26 11:35:31.463038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 [2024-07-26 11:35:31.472602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.050 [2024-07-26 11:35:31.473039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.050 [2024-07-26 11:35:31.473056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.050 [2024-07-26 11:35:31.473063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.050 [2024-07-26 11:35:31.473237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.050 [2024-07-26 11:35:31.473409] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.473419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.473426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 [2024-07-26 11:35:31.476174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 [2024-07-26 11:35:31.485575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.050 [2024-07-26 11:35:31.485919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.050 [2024-07-26 11:35:31.485938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.050 [2024-07-26 11:35:31.485944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.050 [2024-07-26 11:35:31.486116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.050 [2024-07-26 11:35:31.486288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.486296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.486302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 [2024-07-26 11:35:31.489046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.050 [2024-07-26 11:35:31.498604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.050 [2024-07-26 11:35:31.498996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.050 [2024-07-26 11:35:31.499012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.050 [2024-07-26 11:35:31.499019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.050 [2024-07-26 11:35:31.499190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.050 [2024-07-26 11:35:31.499362] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.499370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.499376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 [2024-07-26 11:35:31.499912] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.050 [2024-07-26 11:35:31.502120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 [2024-07-26 11:35:31.511679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.050 [2024-07-26 11:35:31.512095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.050 [2024-07-26 11:35:31.512111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.050 [2024-07-26 11:35:31.512118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.050 [2024-07-26 11:35:31.512290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.050 [2024-07-26 11:35:31.512461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.512469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.512479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.050 [2024-07-26 11:35:31.515222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 [2024-07-26 11:35:31.524782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.050 [2024-07-26 11:35:31.525190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.050 [2024-07-26 11:35:31.525207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.050 [2024-07-26 11:35:31.525213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.050 [2024-07-26 11:35:31.525385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.050 [2024-07-26 11:35:31.525557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.525565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.525571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 [2024-07-26 11:35:31.528317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 Malloc0 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.050 [2024-07-26 11:35:31.537882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.050 [2024-07-26 11:35:31.538241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.050 [2024-07-26 11:35:31.538258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.050 [2024-07-26 11:35:31.538265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.050 [2024-07-26 11:35:31.538439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.050 [2024-07-26 11:35:31.538610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.538618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.538625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 [2024-07-26 11:35:31.541371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.050 [2024-07-26 11:35:31.550941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.050 [2024-07-26 11:35:31.551379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.050 [2024-07-26 11:35:31.551400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76980 with addr=10.0.0.2, port=4420 00:27:36.050 [2024-07-26 11:35:31.551407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76980 is same with the state(5) to be set 00:27:36.050 [2024-07-26 11:35:31.551578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76980 (9): Bad file descriptor 00:27:36.050 [2024-07-26 11:35:31.551755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.050 [2024-07-26 11:35:31.551764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.050 [2024-07-26 11:35:31.551771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.050 [2024-07-26 11:35:31.554511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.050 [2024-07-26 11:35:31.558687] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.050 11:35:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1665384 00:27:36.051 [2024-07-26 11:35:31.563912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.051 [2024-07-26 11:35:31.631461] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:46.008 00:27:46.008 Latency(us) 00:27:46.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:46.008 Verification LBA range: start 0x0 length 0x4000 00:27:46.008 Nvme1n1 : 15.01 8354.83 32.64 13001.05 0.00 5973.73 431.06 14917.24 00:27:46.008 =================================================================================================================== 00:27:46.008 Total : 8354.83 32.64 13001.05 0.00 5973.73 431.06 14917.24 00:27:46.008 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:46.008 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:46.008 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.008 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.008 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.009 rmmod nvme_tcp 00:27:46.009 rmmod nvme_fabrics 00:27:46.009 rmmod nvme_keyring 00:27:46.009 11:35:40 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1666413 ']' 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1666413 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1666413 ']' 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1666413 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666413 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666413' 00:27:46.009 killing process with pid 1666413 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1666413 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1666413 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.009 11:35:40 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.009 11:35:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.387 11:35:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:47.388 00:27:47.388 real 0m26.494s 00:27:47.388 user 1m3.179s 00:27:47.388 sys 0m6.343s 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:47.388 ************************************ 00:27:47.388 END TEST nvmf_bdevperf 00:27:47.388 ************************************ 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.388 ************************************ 00:27:47.388 START TEST nvmf_target_disconnect 00:27:47.388 ************************************ 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:47.388 * Looking for test storage... 
00:27:47.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.388 11:35:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:47.388 11:35:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.678 
11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.678 11:35:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:52.678 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.678 11:35:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:52.678 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:52.678 Found net devices under 0000:86:00.0: cvl_0_0 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:52.678 Found net devices under 0000:86:00.1: cvl_0_1 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.678 11:35:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.678 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:27:52.938 00:27:52.938 --- 10.0.0.2 ping statistics --- 00:27:52.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.938 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:27:52.938 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:27:52.938 00:27:52.938 --- 10.0.0.1 ping statistics --- 00:27:52.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.939 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.939 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:53.199 ************************************ 00:27:53.199 START TEST nvmf_target_disconnect_tc1 00:27:53.199 ************************************ 00:27:53.199 11:35:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.199 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.199 [2024-07-26 11:35:48.761056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.199 [2024-07-26 11:35:48.761162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527e60 with addr=10.0.0.2, port=4420 00:27:53.199 [2024-07-26 11:35:48.761215] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:53.199 [2024-07-26 11:35:48.761241] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:53.199 [2024-07-26 11:35:48.761259] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:53.199 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:53.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:53.199 Initializing NVMe Controllers 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:53.199 11:35:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:53.199 00:27:53.199 real 0m0.112s 00:27:53.199 user 0m0.045s 00:27:53.199 sys 0m0.067s 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:53.199 ************************************ 00:27:53.199 END TEST nvmf_target_disconnect_tc1 00:27:53.199 ************************************ 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:53.199 ************************************ 00:27:53.199 START TEST nvmf_target_disconnect_tc2 00:27:53.199 ************************************ 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1671360 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1671360 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1671360 ']' 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:53.199 11:35:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.458 [2024-07-26 11:35:48.896872] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:27:53.458 [2024-07-26 11:35:48.896913] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.458 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.458 [2024-07-26 11:35:48.951895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.458 [2024-07-26 11:35:49.023786] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.458 [2024-07-26 11:35:49.023825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.458 [2024-07-26 11:35:49.023832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.458 [2024-07-26 11:35:49.023838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.458 [2024-07-26 11:35:49.023843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:53.458 [2024-07-26 11:35:49.023989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:53.458 [2024-07-26 11:35:49.024172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:53.458 [2024-07-26 11:35:49.024259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:53.458 [2024-07-26 11:35:49.024260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.716 Malloc0
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.716 [2024-07-26 11:35:49.199984] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.716 [2024-07-26 11:35:49.224875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.716 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:53.717 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.717 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.717 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.717 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1671578
00:27:53.717 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:53.717 11:35:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:53.717 EAL: No free 2048 kB hugepages reported on node 1
00:27:55.621 11:35:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1671360
00:27:55.621 11:35:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Write completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Write completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Write completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Read completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed
00:27:55.621 Write completed with error (sct=0, sc=8)
00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 [2024-07-26 11:35:51.252424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 
starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Write completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.621 Read completed with error (sct=0, sc=8) 00:27:55.621 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 
starting I/O failed 00:27:55.622 [2024-07-26 11:35:51.252634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O 
failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Read completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 Write completed with error (sct=0, sc=8) 00:27:55.622 starting I/O failed 00:27:55.622 [2024-07-26 11:35:51.252817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.622 [2024-07-26 11:35:51.252991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.253008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.253157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.253167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 
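The xtrace lines earlier in this run show the target being provisioned through SPDK's RPC layer before the fault is injected: rpc_cmd bdev_malloc_create 64 512 -b Malloc0, nvmf_create_transport -t tcp, nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener, after which the reconnect example is launched and the target process is killed with kill -9. As a hedged sketch of what those rpc_cmd wrappers send on the wire: the method names below are taken from the trace itself, but the parameter names, the MiB interpretation of the bdev size argument, and the request framing are assumptions for illustration, not details confirmed by this log.

```python
import json

def rpc(method, **params):
    # SPDK's management interface is JSON-RPC 2.0; this builds one request.
    # The params schemas below are illustrative -- check the SPDK JSON-RPC
    # documentation for the authoritative field names.
    return json.dumps({"jsonrpc": "2.0", "id": 1, "method": method, "params": params})

# "bdev_malloc_create 64 512 -b Malloc0": assuming the positional size is in
# MiB, a 64 MiB backing store with 512-byte blocks gives:
num_blocks = 64 * 1024 * 1024 // 512  # 131072 blocks

requests = [
    rpc("bdev_malloc_create", name="Malloc0", num_blocks=num_blocks, block_size=512),
    rpc("nvmf_create_transport", trtype="TCP"),
    rpc("nvmf_create_subsystem", nqn="nqn.2016-06.io.spdk:cnode1",
        allow_any_host=True, serial_number="SPDK00000000000001"),
    rpc("nvmf_subsystem_add_ns", nqn="nqn.2016-06.io.spdk:cnode1",
        namespace={"bdev_name": "Malloc0"}),
    rpc("nvmf_subsystem_add_listener", nqn="nqn.2016-06.io.spdk:cnode1",
        listen_address={"trtype": "tcp", "adrfam": "IPv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420"}),
]
for r in requests:
    print(r)
```

In the real harness these requests would go to the target's RPC socket via scripts/rpc.py; the sketch only shows the shape and order of the five calls the trace records.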
00:27:55.622 [2024-07-26 11:35:51.253323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.253333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.253573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.253583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.253666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.253680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.253833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.253844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.253942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.253951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 
00:27:55.622 [2024-07-26 11:35:51.254080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.254090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.254240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.254250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.254479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.254489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.254761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.254772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.254924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.254934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 
00:27:55.622 [2024-07-26 11:35:51.255038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.255048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.255140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.255150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.255257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.255267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.255408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.255418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.255508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.255517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 
00:27:55.622 [2024-07-26 11:35:51.255648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.255659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.255766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.622 [2024-07-26 11:35:51.255776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.622 qpair failed and we were unable to recover it. 00:27:55.622 [2024-07-26 11:35:51.255932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.255942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.256037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.256046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.256142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.256152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-07-26 11:35:51.256301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.256311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.256487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.256496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.256698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.256708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.256791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.256800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.256974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.256983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-07-26 11:35:51.257155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.257164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.257479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.257509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.257720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.257751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.257925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.257954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.258081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.258117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-07-26 11:35:51.258371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.258401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.258594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.258624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.258740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.258770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.259035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.259064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.259318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.259348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-07-26 11:35:51.259551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.259580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.259840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.259871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.260054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.260084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.260285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.260294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.260436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.260446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-07-26 11:35:51.260679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.260710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.260971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.261000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.261222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.261252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.261603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.261659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.261794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.261826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-07-26 11:35:51.262029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.262059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.262233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.262262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.262445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.262475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.262665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.262696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-07-26 11:35:51.262835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-07-26 11:35:51.262866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-07-26 11:35:51.262994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.623 [2024-07-26 11:35:51.263024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:55.623 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0x7f41f8000b90 repeats continuously from 11:35:51.263215 through 11:35:51.292019; duplicate records elided ...]
00:27:55.901 [2024-07-26 11:35:51.292317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.901 [2024-07-26 11:35:51.292386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.901 qpair failed and we were unable to recover it.
00:27:55.901 [2024-07-26 11:35:51.292619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.901 [2024-07-26 11:35:51.292670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.901 qpair failed and we were unable to recover it.
00:27:55.901 [2024-07-26 11:35:51.292944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.901 [2024-07-26 11:35:51.292975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.901 qpair failed and we were unable to recover it. 00:27:55.901 [2024-07-26 11:35:51.293095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.293124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.293335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.293366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.293592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.293621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.293877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.293907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 
00:27:55.902 [2024-07-26 11:35:51.294047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.294076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.294327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.294356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.294566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.294596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.294804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.294835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.295126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.295156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 
00:27:55.902 [2024-07-26 11:35:51.295329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.295358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.295602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.295640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.295898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.295929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.296127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.296157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.296395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.296424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 
00:27:55.902 [2024-07-26 11:35:51.296715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.296746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.296873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.296903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.297147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.297177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.297358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.297387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.297638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.297669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 
00:27:55.902 [2024-07-26 11:35:51.297935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.297965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.298255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.298285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.298533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.298563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.298752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.298782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.298968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.298998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 
00:27:55.902 [2024-07-26 11:35:51.299188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.299222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.299485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.299515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.299808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.299839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.300112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.300141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.300390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.300420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 
00:27:55.902 [2024-07-26 11:35:51.300686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.300718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.301012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.301042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.301261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.301290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.301431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.301460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.301705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.301736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 
00:27:55.902 [2024-07-26 11:35:51.301912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.301941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.302180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.302210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.302474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.302504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.902 [2024-07-26 11:35:51.302718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.902 [2024-07-26 11:35:51.302750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.902 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.302936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.302965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.303142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.303171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.303384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.303414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.303619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.303657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.303927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.303956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.304065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.304095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.304283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.304312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.304502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.304532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.304820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.304851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.305061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.305091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.305360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.305389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.305657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.305687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.305827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.305856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.306099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.306134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.306376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.306406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.306581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.306611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.306761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.306792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.307060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.307089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.307330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.307359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.307599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.307643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.307913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.307943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.308202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.308232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.308405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.308435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.308646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.308676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.308862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.308892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.309084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.309114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.309387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.309417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.309633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.309664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.309774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.309803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.309987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.310017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.310192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.310222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.310462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.310491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.310760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.310791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.310967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.310997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.311197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.311227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.311402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.311431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 
00:27:55.903 [2024-07-26 11:35:51.311668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.311700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.903 [2024-07-26 11:35:51.311885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.903 [2024-07-26 11:35:51.311915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.903 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.312142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.312172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.312440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.312470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.312714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.312751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.904 [2024-07-26 11:35:51.313281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.313317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.313559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.313593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.313849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.313880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.314078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.314107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.314300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.314330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.904 [2024-07-26 11:35:51.314573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.314602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.314821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.314852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.315045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.315076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.315254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.315284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.315412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.315442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.904 [2024-07-26 11:35:51.315618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.315657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.315926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.315956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.316143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.316173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.316448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.316478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.316695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.316727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.904 [2024-07-26 11:35:51.316925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.316955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.317143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.317173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.317449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.317479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.317670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.317701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.317944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.317974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.904 [2024-07-26 11:35:51.318167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.318197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.318489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.318519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.318803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.318835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.319110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.319140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.319429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.319459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.904 [2024-07-26 11:35:51.319672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.319703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.319920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.319950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.320152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.320183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.320369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.320398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.320664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.320695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.904 [2024-07-26 11:35:51.320966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.320996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.321261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.321291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.321535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.321565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.321797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.321830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 00:27:55.904 [2024-07-26 11:35:51.322018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.904 [2024-07-26 11:35:51.322046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.904 qpair failed and we were unable to recover it. 
00:27:55.905 [2024-07-26 11:35:51.322287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.322317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.322574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.322603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.322893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.322924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.323216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.323246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.323487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.323517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 
00:27:55.905 [2024-07-26 11:35:51.323764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.323795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.324046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.324076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.324287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.324317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.324501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.324531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.324804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.324834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 
00:27:55.905 [2024-07-26 11:35:51.325130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.325160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.325451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.325482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.325664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.325695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.325966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.325995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.326337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.326368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 
00:27:55.905 [2024-07-26 11:35:51.326644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.326675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.326968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.326998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.327194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.327224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.327514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.327545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.327694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.327726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 
00:27:55.905 [2024-07-26 11:35:51.327903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.327932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.328109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.328139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.328406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.328436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.328624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.328664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.328973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.329003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 
00:27:55.905 [2024-07-26 11:35:51.329280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.329310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.329596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.329636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.329782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.329812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.330002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.905 [2024-07-26 11:35:51.330031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.905 qpair failed and we were unable to recover it. 00:27:55.905 [2024-07-26 11:35:51.330179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.330209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.330476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.330506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.330682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.330713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.330914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.330950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.331215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.331245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.331527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.331557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.331848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.331879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.332159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.332189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.332380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.332410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.332613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.332652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.332900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.332930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.333109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.333140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.333311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.333340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.333607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.333646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.333920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.333950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.334232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.334262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.334465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.334496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.334695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.334726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.334996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.335026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.335253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.335283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.335555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.335585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.335884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.335916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.336164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.336195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.336486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.336516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.336791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.336822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.337118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.337148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.337392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.337422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.337559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.337589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.337738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.337770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.338039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.338070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.338364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.338399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.338614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.338654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.338918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.338948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.339146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.339175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.339374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.339403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 00:27:55.906 [2024-07-26 11:35:51.339671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.906 [2024-07-26 11:35:51.339702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:55.906 qpair failed and we were unable to recover it. 
00:27:55.906 [2024-07-26 11:35:51.339924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.906 [2024-07-26 11:35:51.339954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.906 qpair failed and we were unable to recover it.
00:27:55.906 [2024-07-26 11:35:51.340228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.340258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.340523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.340553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.340753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.340784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.341038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.341067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.341367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.341398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.341623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.341661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.341882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.341914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.342148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.342179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.342474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.342504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.342814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.342845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.343108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.343141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.343282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.343312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.343460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.343490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.344922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.344977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.345274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.345310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.345530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.345562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.345766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.345798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.346047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.346080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.346224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.346254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.346502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.346532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.346721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.346753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.347009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.347040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.347229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.347259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.347455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.347486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.347680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.347711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.347888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.347920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.348221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.348256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.348439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.348471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.348605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.348646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.348837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.348867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.349114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.349145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.349345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.349376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.349522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.349551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 [2024-07-26 11:35:51.349831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.907 [2024-07-26 11:35:51.349865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.907 qpair failed and we were unable to recover it.
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Write completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Read completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.907 Write completed with error (sct=0, sc=8)
00:27:55.907 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Read completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 Write completed with error (sct=0, sc=8)
00:27:55.908 starting I/O failed
00:27:55.908 [2024-07-26 11:35:51.350513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:55.908 [2024-07-26 11:35:51.350793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.350841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.351014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.351045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.351328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.351358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.351584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.351615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.351839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.351870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.352009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.352040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.352346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.352377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.352581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.352611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.352834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.352867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.353048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.353079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.353221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.353251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.353499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.353533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.353761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.353797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.354065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.354096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.354329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.354359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.354659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.354689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.354916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.354946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.355147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.355177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.355340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.355370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.355668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.355699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.355971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.356003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.356204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.356234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.356571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.356601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.356831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.356866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.357066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.357096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.357360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.357391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.357585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.357616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.357875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.357906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.358111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.358141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.358336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.358367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.908 [2024-07-26 11:35:51.358588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.908 [2024-07-26 11:35:51.358617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.908 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.358883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.358914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.359046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.359077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.359212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.359248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.359397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.359427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.359609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.359663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.359811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.359841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.360042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.360072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.360279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.360309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.360578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.360608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.360819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.360851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.361033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.361063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.361342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.361374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.361620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.361662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.361911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.361941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.362084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.362113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.362431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.362461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.362734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.362768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.362921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.362952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.363157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.363187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.363397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.363427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.363730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.363761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.363900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.909 [2024-07-26 11:35:51.363930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.909 qpair failed and we were unable to recover it.
00:27:55.909 [2024-07-26 11:35:51.364066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.364095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.364318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.364347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.364591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.364621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.364776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.364805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.364937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.364968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 
00:27:55.909 [2024-07-26 11:35:51.365173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.365203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.365477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.365507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.365652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.365691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.365886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.365915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 00:27:55.909 [2024-07-26 11:35:51.366116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.366145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.909 qpair failed and we were unable to recover it. 
00:27:55.909 [2024-07-26 11:35:51.366267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.909 [2024-07-26 11:35:51.366296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.366490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.366521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.366749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.366780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.367056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.367086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.367401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.367431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 
00:27:55.910 [2024-07-26 11:35:51.367622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.367668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.367820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.367851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.368050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.368079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.368214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.368244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.368461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.368491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 
00:27:55.910 [2024-07-26 11:35:51.368689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.368720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.368916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.368945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.369088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.369118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.369315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.369345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.369556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.369586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 
00:27:55.910 [2024-07-26 11:35:51.369788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.369820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.370047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.370078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.370355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.370386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.370599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.370650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.370837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.370868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 
00:27:55.910 [2024-07-26 11:35:51.371065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.371095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.371297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.371326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.371475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.371505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.371769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.371799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.371953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.371983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 
00:27:55.910 [2024-07-26 11:35:51.372123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.372153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.372358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.372387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.372591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.372621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.372745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.372774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.373033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.373062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 
00:27:55.910 [2024-07-26 11:35:51.373253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.373282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.373463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.373494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.373691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.373722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.374028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.374058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.374355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.374385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 
00:27:55.910 [2024-07-26 11:35:51.374532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.374562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.374845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.910 [2024-07-26 11:35:51.374876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.910 qpair failed and we were unable to recover it. 00:27:55.910 [2024-07-26 11:35:51.375011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.375047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.375194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.375223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.375425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.375454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.911 [2024-07-26 11:35:51.375760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.375792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.375978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.376008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.376200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.376229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.376348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.376377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.376642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.376675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.911 [2024-07-26 11:35:51.376928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.376958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.377158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.377188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.377393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.377424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.377619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.377661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.377937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.377967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.911 [2024-07-26 11:35:51.378174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.378204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.378479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.378510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.378784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.378815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.379011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.379041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.379191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.379221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.911 [2024-07-26 11:35:51.379473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.379504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.379703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.379734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.379920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.379950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.380203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.380233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.380427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.380457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.911 [2024-07-26 11:35:51.380719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.380749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.380931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.380961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.381185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.381215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.381420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.381450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.381786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.381818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.911 [2024-07-26 11:35:51.382022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.382051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.382252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.382283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.382584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.382614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.382772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.382803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.383078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.383108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.911 [2024-07-26 11:35:51.383303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.383333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.383544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.383574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.383840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.383872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.383984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.384014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 00:27:55.911 [2024-07-26 11:35:51.384135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.911 [2024-07-26 11:35:51.384165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.911 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.384350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.384380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.384583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.384613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.384854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.384892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.385017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.385047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.385187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.385218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.385470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.385500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.385693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.385725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.385978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.386008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.386188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.386218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.386477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.386508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.386691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.386722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.386909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.386939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.387064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.387093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.387316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.387346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.387528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.387559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.387689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.387720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.387933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.387964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.388175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.388206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.388407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.388437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.388709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.388740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.388935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.388965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.389172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.389202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.389398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.389428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.389659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.389690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.389831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.389862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.390050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.390080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.390202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.390232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.390418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.390448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.390582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.390613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.390898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.390929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.391181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.391211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.391466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.391496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.391611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.391659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.391891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.391922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.392058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.392088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 
00:27:55.912 [2024-07-26 11:35:51.392273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.392302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.392425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.912 [2024-07-26 11:35:51.392455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.912 qpair failed and we were unable to recover it. 00:27:55.912 [2024-07-26 11:35:51.392658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.392689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.392963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.392993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.393179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.393209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.913 [2024-07-26 11:35:51.393462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.393492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.393795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.393826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.393958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.393993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.394199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.394230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.394427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.394457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.913 [2024-07-26 11:35:51.394718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.394748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.395010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.395041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.395246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.395276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.395475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.395505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.395789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.395820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.913 [2024-07-26 11:35:51.395931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.395961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.396162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.396192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.396378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.396408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.396608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.396650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.396841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.396871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.913 [2024-07-26 11:35:51.397126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.397157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.397449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.397478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.397696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.397727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.397915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.397945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.398074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.398104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.913 [2024-07-26 11:35:51.398311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.398341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.398541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.398571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.398765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.398795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.398997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.399027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.399220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.399250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.913 [2024-07-26 11:35:51.399501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.399531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.399791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.399823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.400022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.400052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.400256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.400286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.400497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.400528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.913 [2024-07-26 11:35:51.400711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.400741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.400998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.401028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.401158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.401188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.401394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.401424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 00:27:55.913 [2024-07-26 11:35:51.401704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.913 [2024-07-26 11:35:51.401735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.913 qpair failed and we were unable to recover it. 
00:27:55.914 [2024-07-26 11:35:51.401938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.401968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.402169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.402200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.402383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.402413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.402613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.402651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.402846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.402876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 
00:27:55.914 [2024-07-26 11:35:51.403062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.403092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.403235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.403266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.403462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.403492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.403709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.403741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.403928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.403958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 
00:27:55.914 [2024-07-26 11:35:51.404178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.404208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.404336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.404366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.404500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.404530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.404718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.404755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.405030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.405060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 
00:27:55.914 [2024-07-26 11:35:51.405262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.405292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.405427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.405457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.405656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.405688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.405957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.405987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 00:27:55.914 [2024-07-26 11:35:51.406138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.914 [2024-07-26 11:35:51.406167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.914 qpair failed and we were unable to recover it. 
00:27:55.914 [2024-07-26 11:35:51.406358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.914 [2024-07-26 11:35:51.406389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.914 qpair failed and we were unable to recover it.
[The same three-line failure — connect() failed with errno = 111, sock connection error for tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 11:35:51.406596 through 11:35:51.432942.]
00:27:55.917 [2024-07-26 11:35:51.433245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.433275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.433400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.433430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.433553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.433582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.433786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.433818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.433951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.433980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 
00:27:55.917 [2024-07-26 11:35:51.434170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.434200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.434480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.434511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.434723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.434754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.434885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.917 [2024-07-26 11:35:51.434915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.917 qpair failed and we were unable to recover it. 00:27:55.917 [2024-07-26 11:35:51.435199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.435229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.435473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.435502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.435695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.435726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.435919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.435949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.436153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.436183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.436388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.436418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.436560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.436590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.436778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.436809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.437006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.437036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.437252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.437282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.437478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.437514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.437689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.437720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.437911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.437940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.438137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.438167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.438381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.438411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.438602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.438640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.438771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.438801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.438938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.438968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.439149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.439179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.439392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.439422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.439555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.439584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.439806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.439837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.440014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.440044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.440293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.440322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.440584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.440615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.440875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.440905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.441098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.441127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.441331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.441361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.441655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.441687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.441884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.441914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.442094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.442124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.442377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.442406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.442549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.442578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.442850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.442880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.443132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.443161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.443346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.443375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 
00:27:55.918 [2024-07-26 11:35:51.443498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.443528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.443797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.918 [2024-07-26 11:35:51.443828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.918 qpair failed and we were unable to recover it. 00:27:55.918 [2024-07-26 11:35:51.444018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.444048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.444299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.444329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.444594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.444623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 
00:27:55.919 [2024-07-26 11:35:51.444765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.444795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.445068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.445097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.445239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.445268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.445399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.445429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.445701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.445732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 
00:27:55.919 [2024-07-26 11:35:51.445910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.445940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.446186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.446216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.446350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.446380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.446523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.446552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.446803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.446840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 
00:27:55.919 [2024-07-26 11:35:51.446969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.446999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.447244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.447273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.447520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.447549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.447772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.447803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.447931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.447960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 
00:27:55.919 [2024-07-26 11:35:51.448136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.448165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.448283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.448313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.448583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.448614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.448841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.448872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.449137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.449167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 
00:27:55.919 [2024-07-26 11:35:51.449439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.449469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.449744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.449774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.449890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.449920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.450104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.450134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 00:27:55.919 [2024-07-26 11:35:51.450253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.919 [2024-07-26 11:35:51.450282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.919 qpair failed and we were unable to recover it. 
00:27:55.919 [2024-07-26 11:35:51.450554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.919 [2024-07-26 11:35:51.450585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.919 qpair failed and we were unable to recover it.
00:27:55.919 [... the preceding three-message sequence repeats with successive timestamps (11:35:51.450790 through 11:35:51.476727) for each connect retry to 10.0.0.2 port 4420; every attempt fails with errno = 111 (ECONNREFUSED) ...]
00:27:55.922 [2024-07-26 11:35:51.476918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.922 [2024-07-26 11:35:51.476948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.922 qpair failed and we were unable to recover it.
00:27:55.922 [2024-07-26 11:35:51.477177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.922 [2024-07-26 11:35:51.477207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.922 qpair failed and we were unable to recover it. 00:27:55.922 [2024-07-26 11:35:51.477390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.922 [2024-07-26 11:35:51.477421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.922 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.477644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.477675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.477822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.477852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.478039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.478068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 
00:27:55.923 [2024-07-26 11:35:51.478331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.478361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.478482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.478512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.478717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.478747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.478934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.478963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.479159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.479188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 
00:27:55.923 [2024-07-26 11:35:51.479403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.479432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.479620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.479672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.479864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.479893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.480078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.480108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.480303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.480333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 
00:27:55.923 [2024-07-26 11:35:51.480507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.480537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.480669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.480701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.480911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.480941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.481230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.481261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.481536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.481565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 
00:27:55.923 [2024-07-26 11:35:51.481810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.481841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.481966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.481996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.482196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.482225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.482477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.482507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.482723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.482754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 
00:27:55.923 [2024-07-26 11:35:51.482898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.482928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.483056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.483086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.483261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.483290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.483492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.483521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.483666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.483703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 
00:27:55.923 [2024-07-26 11:35:51.483893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.483923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.484042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.484072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.484171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.484200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.923 qpair failed and we were unable to recover it. 00:27:55.923 [2024-07-26 11:35:51.484417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.923 [2024-07-26 11:35:51.484447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.484622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.484664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.484784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.484813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.484918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.484948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.485149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.485179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.485392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.485421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.485599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.485637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.485829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.485859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.486031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.486061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.486266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.486295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.486467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.486497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.486620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.486659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.486758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.486788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.486961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.486991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.487253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.487299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.487574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.487604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.487878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.487909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.488154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.488184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.488385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.488414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.488660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.488692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.488821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.488851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.489064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.489094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.489341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.489370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.489559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.489589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.489790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.489821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.490018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.490048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.490238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.490267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.490514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.490543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.490724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.490755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.491055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.491085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.491206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.491236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.491447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.491476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.491674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.491705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.491902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.491931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.492103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.492133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.492308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.492339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.492458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.492493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 
00:27:55.924 [2024-07-26 11:35:51.492753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.924 [2024-07-26 11:35:51.492784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.924 qpair failed and we were unable to recover it. 00:27:55.924 [2024-07-26 11:35:51.492964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.925 [2024-07-26 11:35:51.492994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.925 qpair failed and we were unable to recover it. 00:27:55.925 [2024-07-26 11:35:51.493176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.925 [2024-07-26 11:35:51.493206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.925 qpair failed and we were unable to recover it. 00:27:55.925 [2024-07-26 11:35:51.493446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.925 [2024-07-26 11:35:51.493475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.925 qpair failed and we were unable to recover it. 00:27:55.925 [2024-07-26 11:35:51.493589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.925 [2024-07-26 11:35:51.493619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.925 qpair failed and we were unable to recover it. 
00:27:55.925 [2024-07-26 11:35:51.493802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.925 [2024-07-26 11:35:51.493831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:55.925 qpair failed and we were unable to recover it.
00:27:55.928 [2024-07-26 11:35:51.519205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.519234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.519500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.519531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.519655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.519685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.519872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.519901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.520007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.520037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 
00:27:55.928 [2024-07-26 11:35:51.520252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.520281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.520476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.520506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.520691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.520721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.520840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.520870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.521068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.521098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 
00:27:55.928 [2024-07-26 11:35:51.521358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.521388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.521559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.521589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.521793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.521824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.522020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.522050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.522312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.522342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 
00:27:55.928 [2024-07-26 11:35:51.522611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.522652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.522836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.522866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.523073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.523102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.523365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.523395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.523647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.523678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 
00:27:55.928 [2024-07-26 11:35:51.523785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.523815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.524054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.524083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.524219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.524249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.524422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.524452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 00:27:55.928 [2024-07-26 11:35:51.524635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.928 [2024-07-26 11:35:51.524665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.928 qpair failed and we were unable to recover it. 
00:27:55.928 [2024-07-26 11:35:51.524930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.524960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.525071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.525106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.525371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.525401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.525589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.525619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.525815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.525845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 
00:27:55.929 [2024-07-26 11:35:51.526031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.526061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.526252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.526282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.526470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.526499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.526749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.526780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.526970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.526999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 
00:27:55.929 [2024-07-26 11:35:51.527111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.527141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.527386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.527416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.527608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.527645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.527772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.527801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.527978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.528007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 
00:27:55.929 [2024-07-26 11:35:51.528184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.528214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.528350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.528380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.528594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.528623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.528814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.528844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.528972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.529001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 
00:27:55.929 [2024-07-26 11:35:51.529238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.529268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.529451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.529480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.529591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.529621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.529766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.529795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.529986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.530015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 
00:27:55.929 [2024-07-26 11:35:51.530211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.530240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.530420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.530450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.530621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.530662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.530958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.530988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.531173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.531203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 
00:27:55.929 [2024-07-26 11:35:51.531380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.531410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.531647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.531678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.531874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.531904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.532092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.532122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.532313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.532342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 
00:27:55.929 [2024-07-26 11:35:51.532529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.532559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.532674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.532704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.929 [2024-07-26 11:35:51.532877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.929 [2024-07-26 11:35:51.532906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.929 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.533011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.533040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.533212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.533241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 
00:27:55.930 [2024-07-26 11:35:51.533508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.533538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.533660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.533707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.533931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.533960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.534253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.534282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.534475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.534504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 
00:27:55.930 [2024-07-26 11:35:51.534623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.534660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.534857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.534887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.535075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.535105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.535281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.535311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.535493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.535522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 
00:27:55.930 [2024-07-26 11:35:51.535710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.535740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.535913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.535942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.536152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.536182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.536361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.536390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 00:27:55.930 [2024-07-26 11:35:51.536654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.930 [2024-07-26 11:35:51.536685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:55.930 qpair failed and we were unable to recover it. 
00:27:56.213 [2024-07-26 11:35:51.559779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.559809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.559982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.560012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.560269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.560299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.560556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.560586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.560850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.560881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 
00:27:56.213 [2024-07-26 11:35:51.560996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.561026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.561294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.561323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.561564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.561594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.561808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.561839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.562053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.562083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 
00:27:56.213 [2024-07-26 11:35:51.562285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.562315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.562581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.562611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.562740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.562771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.563013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.563043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.563229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.563258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 
00:27:56.213 [2024-07-26 11:35:51.563499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.563529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.563791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.563823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.564012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.564041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.564183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.564212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.564386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.564416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 
00:27:56.213 [2024-07-26 11:35:51.564656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.564686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.564927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.564957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.565132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.565162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.565292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.213 [2024-07-26 11:35:51.565321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.213 qpair failed and we were unable to recover it. 00:27:56.213 [2024-07-26 11:35:51.565604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.565642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.565902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.565931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.566124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.566154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.566325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.566354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.566459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.566488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.566589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.566618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.566867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.566897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.567147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.567176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.567369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.567399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.567652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.567683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.567931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.567960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.568138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.568167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.568345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.568380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.568644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.568674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.568800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.568830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.569034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.569064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.569189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.569218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.569354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.569384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.569668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.569699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.569820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.569850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.570057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.570087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.570298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.570328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.570548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.570578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.570726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.570756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.570927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.570956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.571082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.571112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.571286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.571316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.571513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.571543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.571662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.571693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.571884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.571914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.572086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.572115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.572309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.572338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.572597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.572635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.572857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.572887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.573068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.573097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.573298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.573327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 
00:27:56.214 [2024-07-26 11:35:51.573504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.573534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.214 qpair failed and we were unable to recover it. 00:27:56.214 [2024-07-26 11:35:51.573704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.214 [2024-07-26 11:35:51.573735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.573928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.573957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.574138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.574169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.574371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.574401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 
00:27:56.215 [2024-07-26 11:35:51.574644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.574676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.574879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.574908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.575097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.575127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.575240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.575270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.575484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.575513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 
00:27:56.215 [2024-07-26 11:35:51.575700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.575740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.575847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.575876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.576074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.576103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.576297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.576327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.576514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.576544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 
00:27:56.215 [2024-07-26 11:35:51.576720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.576752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.576898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.576934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.577186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.577215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.577352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.577382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 00:27:56.215 [2024-07-26 11:35:51.577575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.215 [2024-07-26 11:35:51.577605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.215 qpair failed and we were unable to recover it. 
00:27:56.218 [2024-07-26 11:35:51.601774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.601804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.601990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.602020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.602258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.602287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.602481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.602510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.602721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.602751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 
00:27:56.218 [2024-07-26 11:35:51.602926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.602956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.603198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.603232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.603344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.603374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.603546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.603575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.603755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.603785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 
00:27:56.218 [2024-07-26 11:35:51.603990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.604020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.604192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.604222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.604463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.604493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.218 qpair failed and we were unable to recover it. 00:27:56.218 [2024-07-26 11:35:51.604758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.218 [2024-07-26 11:35:51.604789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.604963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.604992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.605097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.605126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.605317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.605347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.605609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.605664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.605849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.605879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.606092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.606122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.606250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.606281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.606469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.606498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.606679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.606710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.606810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.606840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.607012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.607041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.607160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.607189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.607319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.607348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.607521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.607550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.607762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.607793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.608077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.608107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.608291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.608321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.608444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.608474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.608679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.608710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.608896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.608926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.609211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.609241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.609361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.609391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.609559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.609589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.609760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.609790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.609981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.610011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.610147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.610176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.610365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.610395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.610584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.610614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.610875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.610906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.611140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.611170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.611308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.611337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.611509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.611538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.611709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.611745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.611919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.611950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.612081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.612110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.612239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.612269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 
00:27:56.219 [2024-07-26 11:35:51.612401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.219 [2024-07-26 11:35:51.612431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.219 qpair failed and we were unable to recover it. 00:27:56.219 [2024-07-26 11:35:51.612651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.612682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.612870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.612900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.613076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.613106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.613222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.613252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 
00:27:56.220 [2024-07-26 11:35:51.613382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.613412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.613682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.613713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.613907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.613937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.614063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.614093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.614266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.614296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 
00:27:56.220 [2024-07-26 11:35:51.614541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.614571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.614822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.614852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.615028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.615058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.615237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.615266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.615533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.615563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 
00:27:56.220 [2024-07-26 11:35:51.615736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.615767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.615954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.615983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.616268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.616298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.616574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.616603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.616735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.616765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 
00:27:56.220 [2024-07-26 11:35:51.616937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.616967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.617078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.617107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.617312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.617342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.617542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.617573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.617845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.617876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 
00:27:56.220 [2024-07-26 11:35:51.617993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.618023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.618142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.618171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.618430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.618459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.618727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.618758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 00:27:56.220 [2024-07-26 11:35:51.618942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.220 [2024-07-26 11:35:51.618972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.220 qpair failed and we were unable to recover it. 
00:27:56.223 [2024-07-26 11:35:51.643794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.223 [2024-07-26 11:35:51.643824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.223 qpair failed and we were unable to recover it. 00:27:56.223 [2024-07-26 11:35:51.643995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.223 [2024-07-26 11:35:51.644025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.223 qpair failed and we were unable to recover it. 00:27:56.223 [2024-07-26 11:35:51.644237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.223 [2024-07-26 11:35:51.644266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.223 qpair failed and we were unable to recover it. 00:27:56.223 [2024-07-26 11:35:51.644444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.223 [2024-07-26 11:35:51.644478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.223 qpair failed and we were unable to recover it. 00:27:56.223 [2024-07-26 11:35:51.644669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.223 [2024-07-26 11:35:51.644699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.223 qpair failed and we were unable to recover it. 
00:27:56.223 [2024-07-26 11:35:51.644919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.644948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.645216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.645245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.645376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.645405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.645667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.645697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.645883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.645913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 
00:27:56.224 [2024-07-26 11:35:51.646166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.646195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.646435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.646465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.646675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.646706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.646916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.646945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.647120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.647150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 
00:27:56.224 [2024-07-26 11:35:51.647285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.647315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.647505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.647535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.647658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.647689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.647809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.647838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.648088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.648117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 
00:27:56.224 [2024-07-26 11:35:51.648395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.648425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.648665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.648696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.648966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.648995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.649169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.649199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.649411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.649441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 
00:27:56.224 [2024-07-26 11:35:51.649637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.649668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.649952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.649982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.650173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.650202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.650341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.650371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.650491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.650521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 
00:27:56.224 [2024-07-26 11:35:51.650700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.650730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.650858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.650888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.651064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.651093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.651283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.651312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.651556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.651586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 
00:27:56.224 [2024-07-26 11:35:51.651802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.651833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.651947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.651977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.652153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.652183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.652369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.652400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.652596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.652635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 
00:27:56.224 [2024-07-26 11:35:51.652786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.652816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.653007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.653037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.224 [2024-07-26 11:35:51.653258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.224 [2024-07-26 11:35:51.653287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.224 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.653412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.653447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.653695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.653726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 
00:27:56.225 [2024-07-26 11:35:51.653998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.654027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.654240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.654270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.654392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.654422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.654611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.654649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.654798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.654827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 
00:27:56.225 [2024-07-26 11:35:51.655071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.655101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.655219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.655249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.655443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.655472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.655679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.655710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.655951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.655981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 
00:27:56.225 [2024-07-26 11:35:51.656250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.656280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.656478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.656508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.656712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.656744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.657010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.657039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.657229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.657258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 
00:27:56.225 [2024-07-26 11:35:51.657429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.657459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.657638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.657669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.657864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.657893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.658134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.658164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.658343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.658372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 
00:27:56.225 [2024-07-26 11:35:51.658588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.658618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.658875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.658906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.659188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.659218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.659402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.659432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.659646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.659676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 
00:27:56.225 [2024-07-26 11:35:51.659857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.659887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.660147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.660177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.660300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.660329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.660522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.225 [2024-07-26 11:35:51.660552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.225 qpair failed and we were unable to recover it. 00:27:56.225 [2024-07-26 11:35:51.660678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.226 [2024-07-26 11:35:51.660709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.226 qpair failed and we were unable to recover it. 
00:27:56.226 [2024-07-26 11:35:51.660914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.660943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.661117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.661147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.661391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.661421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.661600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.661638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.661831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.661860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.662101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.662131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.662300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.662329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.662521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.662551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.662734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.662770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.662887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.662916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.663108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.663138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.663316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.663345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.663551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.663581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.663767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.663798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.664010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.664039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.664167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.664197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.664314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.664344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.664605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.664653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.664937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.664967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.665206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.665236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.665477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.665507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.665610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.665650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.665769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.665799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.666079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.666109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.666282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.666311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.666506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.666535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.666658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.666690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.666939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.666969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.667153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.667182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.667428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.667457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.667584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.667614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.667802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.667832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.667938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.667968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.668096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.668126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.668246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.668276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.668526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.668556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-07-26 11:35:51.668694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.226 [2024-07-26 11:35:51.668724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.668990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.669020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.669130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.669159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.669369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.669399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.669573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.669602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.669851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.669881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.670092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.670122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.670323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.670352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.670618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.670656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.670782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.670812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.671017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.671046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.671164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.671194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.671378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.671413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.671550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.671580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.671806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.671837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.672015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.672044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.672251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.672281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.672474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.672503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.672715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.672745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.672986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.673016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.673152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.673181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.673363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.673393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.673640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.673670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.673793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.673822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.674006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.674035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.674221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.674251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.674499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.674529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.674792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.674822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.675030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.675060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.675250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.675280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.675495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.675525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.675764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.675794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.675985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.676014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.676201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.676230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.676365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.676395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.676578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.676607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.676916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.676946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.677121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.677151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-07-26 11:35:51.677274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.227 [2024-07-26 11:35:51.677304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.677441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.677471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.677738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.677768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.677963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.677992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.678176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.678206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.678321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.678350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.678545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.678574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.678710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.678740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.679008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.679038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.679225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.679255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.679431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.679460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.679646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.679677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.679948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.679977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.680186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.680216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.680454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.680490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.680647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.680678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.680878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.680908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.681125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.681154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.681348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.681377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.681506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.681535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.681724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.681755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.681935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.681964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.682210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.682239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.682438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.682468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.682730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.682760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.682955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.682984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.683182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.683212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.683390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.683419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.683613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.683650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.683864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.683893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.684080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.684109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.684283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.684312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.684534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.684563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.684827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.684858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.685109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.685139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.685270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.685300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.685495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.685524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.685724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.228 [2024-07-26 11:35:51.685755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-07-26 11:35:51.685948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.229 [2024-07-26 11:35:51.685976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.229 qpair failed and we were unable to recover it.
00:27:56.229 [2024-07-26 11:35:51.686221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.229 [2024-07-26 11:35:51.686250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.229 qpair failed and we were unable to recover it.
00:27:56.229 [2024-07-26 11:35:51.686452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.229 [2024-07-26 11:35:51.686482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.229 qpair failed and we were unable to recover it.
00:27:56.229 [2024-07-26 11:35:51.686711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.686743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.686865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.686895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.687087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.687116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.687326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.687355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.687477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.687507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 
00:27:56.229 [2024-07-26 11:35:51.687753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.687782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.687918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.687948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.688090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.688120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.688242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.688271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.688448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.688477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 
00:27:56.229 [2024-07-26 11:35:51.688752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.688783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.688997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.689027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.689156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.689185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.689473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.689508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.689616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.689657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 
00:27:56.229 [2024-07-26 11:35:51.689845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.689875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.690012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.690041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.690282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.690311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.690447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.690475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.690578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.690607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 
00:27:56.229 [2024-07-26 11:35:51.690887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.690917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.691114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.691143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.691258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.691288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.691478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.691508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.691804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.691835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 
00:27:56.229 [2024-07-26 11:35:51.692103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.692132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.692398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.692427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.692677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.692708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.229 qpair failed and we were unable to recover it. 00:27:56.229 [2024-07-26 11:35:51.692973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.229 [2024-07-26 11:35:51.693002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.693136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.693166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.693385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.693414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.693597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.693635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.693917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.693947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.694160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.694189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.694383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.694413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.694639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.694669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.694785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.694814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.695011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.695040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.695309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.695338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.695471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.695501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.695762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.695793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.696019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.696049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.696225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.696255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.696526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.696556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.696742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.696773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.697044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.697073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.697262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.697293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.697503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.697533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.697800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.697831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.697975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.698004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.698284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.698313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.698432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.698462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.698643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.698674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.698863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.698898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.699028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.699058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.699239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.699268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.699441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.699471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.699661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.699691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.699959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.699989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.700167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.700197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.700382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.700411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.700596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.700633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.700807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.700837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.701078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.701108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.701235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.701264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 
00:27:56.230 [2024-07-26 11:35:51.701455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.230 [2024-07-26 11:35:51.701485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.230 qpair failed and we were unable to recover it. 00:27:56.230 [2024-07-26 11:35:51.701690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.701720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.701903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.701932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.702194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.702224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.702329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.702359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 
00:27:56.231 [2024-07-26 11:35:51.702546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.702575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.702709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.702740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.703006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.703035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.703163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.703193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.703319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.703349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 
00:27:56.231 [2024-07-26 11:35:51.703607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.703647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.703792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.703822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.704004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.704033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.704295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.704325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 00:27:56.231 [2024-07-26 11:35:51.704520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.231 [2024-07-26 11:35:51.704550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.231 qpair failed and we were unable to recover it. 
00:27:56.234 [log trimmed: the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeated continuously from 11:35:51.704 through 11:35:51.729]
00:27:56.234 [2024-07-26 11:35:51.729266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.729296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.729472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.729501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.729646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.729677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.729829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.729858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.730076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.730105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-07-26 11:35:51.730292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.730322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.730505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.730535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.730799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.730830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.730967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.730997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.731126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.731156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-07-26 11:35:51.731339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.731370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.731611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.731649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.731772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.731802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.731927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.731957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.732198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.732228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-07-26 11:35:51.732400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.732430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.732619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.732666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.732802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.732832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.733046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.733076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.733182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.733213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-07-26 11:35:51.733409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.234 [2024-07-26 11:35:51.733439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.234 qpair failed and we were unable to recover it. 00:27:56.234 [2024-07-26 11:35:51.733546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.733577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.733784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.733817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.734089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.734124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.734315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.734345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-07-26 11:35:51.734545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.734575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.734770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.734800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.734979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.735008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.735135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.735164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.735348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.735378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-07-26 11:35:51.735486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.735516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.735650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.735680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.735787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.735817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.735929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.735959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.736081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.736111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-07-26 11:35:51.736399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.736429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.736673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.736703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.736845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.736876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.737010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.737040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.737162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.737192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-07-26 11:35:51.737320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.737349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.737546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.737576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.737780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.737811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.737935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.737965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.738077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.738106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-07-26 11:35:51.738215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.738245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.738447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.738477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.738697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.738727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.738916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.738946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.739126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.739155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-07-26 11:35:51.739353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.739383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.739509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.739539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.739717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.739748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.739941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.739971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.740096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.740126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-07-26 11:35:51.740367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.740397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.740642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.740672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.740847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.740877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.235 [2024-07-26 11:35:51.740992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.235 [2024-07-26 11:35:51.741021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.235 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.741267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.741298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-07-26 11:35:51.741480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.741510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.741683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.741714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.741901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.741932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.742058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.742088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.742334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.742364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-07-26 11:35:51.742490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.742520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.742699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.742730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.742913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.742942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.743116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.743145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.743261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.743291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-07-26 11:35:51.743563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.743594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.743779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.743809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.743998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.744028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.744160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.744190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 00:27:56.236 [2024-07-26 11:35:51.744446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.236 [2024-07-26 11:35:51.744476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-07-26 11:35:51.744662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.744693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.744887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.744917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.745133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.745164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.745337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.745367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.745568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.745597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.745777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.745808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.746006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.746036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.746162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.746192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.746312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.746342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.746553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.746583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.746704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.746735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.746915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.746945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.747072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.747102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.747227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.747256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.747375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.747405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.747587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.747622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.747871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.747902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.748081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.748111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.748230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.748260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.748437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.748466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.748659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.748690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.236 qpair failed and we were unable to recover it.
00:27:56.236 [2024-07-26 11:35:51.748933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.236 [2024-07-26 11:35:51.748964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.749231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.749261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.749368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.749397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.749651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.749681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.749969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.749999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.750118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.750148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.750260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.750290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.750480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.750509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.750638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.750669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.750862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.750892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.751014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.751051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.751261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.751291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.751398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.751428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.751570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.751600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.751727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.751758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.751881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.751911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.752031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.752060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.752241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.752270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.752461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.752491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.752606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.752646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.752951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.752980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.753155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.753185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.753387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.753417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.753595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.753625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.753804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.753834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.754075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.754104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.754348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.754378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.754585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.754615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.754733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.754763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.754944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.754974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.755104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.755134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.755259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.755288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.755476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.755505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.755606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.755646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.755887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.755922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.756162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.756191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.756440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.756470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.756670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.237 [2024-07-26 11:35:51.756702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.237 qpair failed and we were unable to recover it.
00:27:56.237 [2024-07-26 11:35:51.756919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.756950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.757140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.757169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.757293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.757322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.757441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.757470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.757579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.757609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.757724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.757755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.758005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.758034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.758222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.758252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.758425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.758455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.758574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.758603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.758866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.758896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.759066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.759096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.759296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.759325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.759514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.759543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.759666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.759696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.759847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.759877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.760023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.760054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.760167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.760196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.760382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.760412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.760600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.760637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.760764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.760795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.760910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.760939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.761080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.761110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.761250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.761280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.761426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.761455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.761643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.761673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.761849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.761878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.762073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.762103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.762227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.762257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.762374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.762403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.762512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.762542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.762664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.762694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.762812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.762841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.763089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.763119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.763333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.763362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.763481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.763511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.763778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.763818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.238 [2024-07-26 11:35:51.763994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.238 [2024-07-26 11:35:51.764023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.238 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.764269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.764299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.764417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.764446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.764584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.764614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.764750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.764781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.764965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.764994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.765122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.765153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.765264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.765293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.765486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.765515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.765695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.765726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.765878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.765908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.766088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.766118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.766308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.766337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.766523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.766553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.766729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.766761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.766875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.766905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.767010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.767040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.767153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.767182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.767358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.239 [2024-07-26 11:35:51.767388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.239 qpair failed and we were unable to recover it.
00:27:56.239 [2024-07-26 11:35:51.767492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.767522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.767698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.767729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.767972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.768002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.768121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.768150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.768337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.768367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-07-26 11:35:51.768506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.768536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.768648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.768679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.768876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.768906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.769030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.769060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.769265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.769295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-07-26 11:35:51.769507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.769536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.769658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.769688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.769864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.769894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.239 [2024-07-26 11:35:51.770017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.239 [2024-07-26 11:35:51.770047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.239 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.770184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.770213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.770334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.770365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.770500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.770530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.770741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.770772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.770995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.771024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.771147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.771177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.771370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.771405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.771661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.771692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.771814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.771843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.772031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.772061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.772258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.772288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.772508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.772537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.772664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.772695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.772958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.772988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.773174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.773204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.773414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.773444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.773562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.773592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.773787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.773817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.773931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.773960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.774080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.774110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.774355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.774386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.774633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.774664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.774938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.774968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.775079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.775108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.775320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.775350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.775555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.775585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.775835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.775866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.775994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.776023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.776198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.776228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.776339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.776370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.776490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.776520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.776697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.776728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.776902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.776936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.777211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.777241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.777462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.777492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.777598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.777647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-07-26 11:35:51.777832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.240 [2024-07-26 11:35:51.777861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.240 qpair failed and we were unable to recover it. 00:27:56.240 [2024-07-26 11:35:51.778104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.778134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.778253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.778282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.778487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.778517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.778694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.778725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 
00:27:56.241 [2024-07-26 11:35:51.778904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.778933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.779036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.779065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.779193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.779222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.779406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.779436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.779573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.779603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 
00:27:56.241 [2024-07-26 11:35:51.779788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.779823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.780086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.780116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.780291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.780320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.780439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.780469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.780659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.780690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 
00:27:56.241 [2024-07-26 11:35:51.780859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.780888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.781025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.781056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.781233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.781262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.781439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.781469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.781603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.781666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 
00:27:56.241 [2024-07-26 11:35:51.781791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.781820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.781993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.782023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.782134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.782163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.782273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.782303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.782553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.782584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 
00:27:56.241 [2024-07-26 11:35:51.782729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.782760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.782878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.782907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.783081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.783111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.783290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.783320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 00:27:56.241 [2024-07-26 11:35:51.783519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-07-26 11:35:51.783549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.241 qpair failed and we were unable to recover it. 
00:27:56.244 [2024-07-26 11:35:51.805373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.244 [2024-07-26 11:35:51.805402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.244 qpair failed and we were unable to recover it. 00:27:56.244 [2024-07-26 11:35:51.805516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.244 [2024-07-26 11:35:51.805545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.244 qpair failed and we were unable to recover it. 00:27:56.244 [2024-07-26 11:35:51.805732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.244 [2024-07-26 11:35:51.805763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.244 qpair failed and we were unable to recover it. 00:27:56.244 [2024-07-26 11:35:51.805889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.244 [2024-07-26 11:35:51.805919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.244 qpair failed and we were unable to recover it. 00:27:56.244 [2024-07-26 11:35:51.806051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.244 [2024-07-26 11:35:51.806080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.244 qpair failed and we were unable to recover it. 
00:27:56.244 [2024-07-26 11:35:51.806183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.806213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.806340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.806369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.806610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.806649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.806782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.806812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.807009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.807039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 
00:27:56.245 [2024-07-26 11:35:51.807169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.807198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.807411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.807441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.807541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.807570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.807762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.807792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.807896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.807925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 
00:27:56.245 [2024-07-26 11:35:51.808060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.808090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.808263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.808293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.808489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.808520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.808639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.808670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.808959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.808988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 
00:27:56.245 [2024-07-26 11:35:51.809209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.809239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.809419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.809448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.809552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.809582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.809859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.809890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.810024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.810053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 
00:27:56.245 [2024-07-26 11:35:51.810159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.810190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.810363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.810393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.810506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.810536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.810712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.810743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.810969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.810999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 
00:27:56.245 [2024-07-26 11:35:51.811118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.811154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.811331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.811361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.811464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.811493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.811609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.811646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.811852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.811882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 
00:27:56.245 [2024-07-26 11:35:51.812002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.812032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.812170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.812199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.812323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.812353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.812488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.812517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.812661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.812691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 
00:27:56.245 [2024-07-26 11:35:51.812874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.812904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.813020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.813049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.245 [2024-07-26 11:35:51.813230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.245 [2024-07-26 11:35:51.813259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.245 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.813379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.813409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.813603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.813639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.813755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.813784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.814048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.814077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.814206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.814235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.814362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.814392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.814511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.814540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.814669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.814700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.814894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.814923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.815100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.815130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.815251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.815280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.815401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.815430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.815603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.815641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.815753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.815782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.815983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.816013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.816135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.816164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.816272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.816302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.816405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.816435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.816667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.816698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.816870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.816900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.817012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.817041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.817219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.817249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.817461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.817491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.817602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.817641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.817911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.817941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.818048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.818078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.818195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.818224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.818456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.818492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.818647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.818677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.818866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.818896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.819026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.819055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.819226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.819255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.819369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.819399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.819514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.819543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.819712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.819742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.820007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.820037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 00:27:56.246 [2024-07-26 11:35:51.820224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.246 [2024-07-26 11:35:51.820253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.246 qpair failed and we were unable to recover it. 
00:27:56.246 [2024-07-26 11:35:51.820371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.820400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.820596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.820636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.820811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.820841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.821016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.821045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.821173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.821203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 
00:27:56.247 [2024-07-26 11:35:51.821443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.821473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.821608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.821646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.821759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.821788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.821917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.821947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.822074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.822103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 
00:27:56.247 [2024-07-26 11:35:51.822367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.822397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.822607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.822643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.822756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.822786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.822970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.822999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.823244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.823273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 
00:27:56.247 [2024-07-26 11:35:51.823463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.823493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.823679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.823710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.823836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.823867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.823986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.824015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.824112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.824141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 
00:27:56.247 [2024-07-26 11:35:51.824334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.824364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.824486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.824516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.824659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.824690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.824871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.824901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.825013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.825042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 
00:27:56.247 [2024-07-26 11:35:51.825222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.825252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.825424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.825453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.825568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.825598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.825795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.825825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.826002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.826031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 
00:27:56.247 [2024-07-26 11:35:51.826290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.826325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.826452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.826481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.826592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.826621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.826759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.826790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.826914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.826943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 
00:27:56.247 [2024-07-26 11:35:51.827071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.827100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.827290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.827320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.247 [2024-07-26 11:35:51.827422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.247 [2024-07-26 11:35:51.827452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.247 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.827573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.827603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.827793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.827824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 
00:27:56.248 [2024-07-26 11:35:51.828016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.828045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.828168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.828198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.828303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.828332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.828598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.828636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.828817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.828848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 
00:27:56.248 [2024-07-26 11:35:51.828960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.828990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.829187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.829216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.829333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.829363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.829488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.829518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.829648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.829679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 
00:27:56.248 [2024-07-26 11:35:51.829808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.829838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.830010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.830040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.830143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.830173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.830302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.830332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.830509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.830538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 
00:27:56.248 [2024-07-26 11:35:51.830729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.830761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.830942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.830972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.831096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.831126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.831314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.831344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.831473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.831503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 
00:27:56.248 [2024-07-26 11:35:51.831616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.831656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.831790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.831819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.831935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.831964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.832142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.832171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.832350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.832380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 
00:27:56.248 [2024-07-26 11:35:51.832521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.832551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.832765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.832795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.833059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.833089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.833333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.833363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 00:27:56.248 [2024-07-26 11:35:51.833603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.248 [2024-07-26 11:35:51.833643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.248 qpair failed and we were unable to recover it. 
00:27:56.249 [2024-07-26 11:35:51.833752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.833788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.833969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.833999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.834118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.834147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.834326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.834356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.834484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.834513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 
00:27:56.249 [2024-07-26 11:35:51.834718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.834749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.834984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.835014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.835152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.835182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.835359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.835389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.835510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.835540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 
00:27:56.249 [2024-07-26 11:35:51.835727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.835758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.835898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.835928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.836117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.836147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.836416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.836445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.836573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.836603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 
00:27:56.249 [2024-07-26 11:35:51.836740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.836772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.837038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.837068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.837208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.837238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.837458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.837488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.837772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.837803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 
00:27:56.249 [2024-07-26 11:35:51.837919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.837949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.838186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.838215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.838502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.838531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.838656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.838687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.838880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.838910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 
00:27:56.249 [2024-07-26 11:35:51.839022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.839051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.839243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.839272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.839406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.839437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.839617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.839655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 00:27:56.249 [2024-07-26 11:35:51.839844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.249 [2024-07-26 11:35:51.839873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.249 qpair failed and we were unable to recover it. 
00:27:56.249 [2024-07-26 11:35:51.840059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.249 [2024-07-26 11:35:51.840089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.249 qpair failed and we were unable to recover it.
00:27:56.249 [2024-07-26 11:35:51.840269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.249 [2024-07-26 11:35:51.840298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.249 qpair failed and we were unable to recover it.
00:27:56.249 [2024-07-26 11:35:51.840407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.249 [2024-07-26 11:35:51.840437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.249 qpair failed and we were unable to recover it.
00:27:56.249 [2024-07-26 11:35:51.840648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.249 [2024-07-26 11:35:51.840678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.249 qpair failed and we were unable to recover it.
00:27:56.249 [2024-07-26 11:35:51.840853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.249 [2024-07-26 11:35:51.840883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.249 qpair failed and we were unable to recover it.
00:27:56.249 [2024-07-26 11:35:51.841072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.249 [2024-07-26 11:35:51.841102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.249 qpair failed and we were unable to recover it.
00:27:56.249 [2024-07-26 11:35:51.841350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.249 [2024-07-26 11:35:51.841379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.249 qpair failed and we were unable to recover it.
00:27:56.249 [2024-07-26 11:35:51.841485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.841515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.841644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.841674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.841823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.841853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.842038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.842068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.842203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.842233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.842363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.842393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.842498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.842528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.842670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.842702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.842822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.842852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.843124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.843154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.843280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.843308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.843497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.843527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.843721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.843753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.843965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.843994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.844100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.844130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.844255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.844285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.844492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.844521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.844639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.844669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.844919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.844948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.845053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.845083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.845344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.845374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.845508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.845538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.845734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.845765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.845884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.845913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.846037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.846066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.846194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.846223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.846353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.846383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.846501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.846530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.846729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.846760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.846878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.846907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.847010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.847048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.847225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.847255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.847548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.847578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.847716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.847747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.847887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.847916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.848031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.848060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.848253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.848282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.250 qpair failed and we were unable to recover it.
00:27:56.250 [2024-07-26 11:35:51.848387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.250 [2024-07-26 11:35:51.848418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.251 qpair failed and we were unable to recover it.
00:27:56.251 [2024-07-26 11:35:51.848538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.251 [2024-07-26 11:35:51.848568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.251 qpair failed and we were unable to recover it.
00:27:56.251 [2024-07-26 11:35:51.848688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.251 [2024-07-26 11:35:51.848719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.251 qpair failed and we were unable to recover it.
00:27:56.251 [2024-07-26 11:35:51.848960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.251 [2024-07-26 11:35:51.848989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.251 qpair failed and we were unable to recover it.
00:27:56.251 [2024-07-26 11:35:51.849094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.251 [2024-07-26 11:35:51.849123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.251 qpair failed and we were unable to recover it.
00:27:56.251 [2024-07-26 11:35:51.849296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.251 [2024-07-26 11:35:51.849326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.251 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.849432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.849460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.849654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.849685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.849934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.849964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.850157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.850186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.850321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.850351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.850464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.850494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.850649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.850679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.850884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.850914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.851085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.851114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.851237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.851266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.851377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.851406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.851526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.851555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.851770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.851801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.851978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.852008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.852281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.852312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.852459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.852489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.852612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.852650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.852773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.852802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.852935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.852965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.853076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.853105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.853221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.853251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.853553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.853583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.853724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.853755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.853873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.853902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.854079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.854109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.854229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.854258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.854443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.854473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.854597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.854642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.535 [2024-07-26 11:35:51.854757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.535 [2024-07-26 11:35:51.854786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.535 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.854888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.854918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.855147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.855177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.855310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.855339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.855466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.855495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.855705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.855736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.855845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.855875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.855994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.856023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.856145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.856174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.856301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.856331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.856438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.856468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.856585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.856615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.856729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.856759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.856999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.857029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.857148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.857177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.857288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.857317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.857438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.536 [2024-07-26 11:35:51.857467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.536 qpair failed and we were unable to recover it.
00:27:56.536 [2024-07-26 11:35:51.857591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.857621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.857754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.857785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.857926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.857955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.858080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.858109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.858235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.858265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 
00:27:56.536 [2024-07-26 11:35:51.858394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.858424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.858598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.858650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.858850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.858881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.859000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.859030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.859159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.859189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 
00:27:56.536 [2024-07-26 11:35:51.859319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.859349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.859568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.859597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.859719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.859750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.859877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.859906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.860025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.860055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 
00:27:56.536 [2024-07-26 11:35:51.860165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.860195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.860317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.860346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.860549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.860579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.860706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.860737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.861019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.861048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 
00:27:56.536 [2024-07-26 11:35:51.861159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.536 [2024-07-26 11:35:51.861189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.536 qpair failed and we were unable to recover it. 00:27:56.536 [2024-07-26 11:35:51.861312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.861342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.861450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.861485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.861597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.861637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.861747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.861776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 
00:27:56.537 [2024-07-26 11:35:51.861971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.862002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.862117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.862147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.862270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.862299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.862404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.862433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.862621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.862659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 
00:27:56.537 [2024-07-26 11:35:51.862844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.862874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.862998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.863027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.863144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.863174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.863319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.863349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.863476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.863505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 
00:27:56.537 [2024-07-26 11:35:51.863694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.863725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.863973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.864003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.864121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.864151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.864286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.864315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.864430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.864458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 
00:27:56.537 [2024-07-26 11:35:51.864716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.864747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.864881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.864910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.865046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.865075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.865183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.865213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.865401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.865430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 
00:27:56.537 [2024-07-26 11:35:51.865675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.865706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.865822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.865851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.866024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.866054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.866175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.866204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.866331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.866361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 
00:27:56.537 [2024-07-26 11:35:51.866531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.866561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.866692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.866723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.866902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.866931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.867053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.867083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.867290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.867319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 
00:27:56.537 [2024-07-26 11:35:51.867428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.867457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.867579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.867608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.867811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.537 [2024-07-26 11:35:51.867842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.537 qpair failed and we were unable to recover it. 00:27:56.537 [2024-07-26 11:35:51.868015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.868045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.868220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.868250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 
00:27:56.538 [2024-07-26 11:35:51.868368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.868398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.868573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.868603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.868804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.868840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.868963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.868992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.869169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.869199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 
00:27:56.538 [2024-07-26 11:35:51.869319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.869348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.869464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.869493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.869613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.869654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.869777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.869807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.869925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.869954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 
00:27:56.538 [2024-07-26 11:35:51.870145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.870175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.870351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.870380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.870673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.870704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.870879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.870910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.871023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.871053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 
00:27:56.538 [2024-07-26 11:35:51.871168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.871198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.871475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.871505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.871643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.871673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.871793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.871822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.871945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.871975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 
00:27:56.538 [2024-07-26 11:35:51.872191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.872221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.872345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.872374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.872494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.872523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.872648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.872679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.872796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.872826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 
00:27:56.538 [2024-07-26 11:35:51.872943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.872972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.873099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.873129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.873312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.873342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.873459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.873489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 00:27:56.538 [2024-07-26 11:35:51.873678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.538 [2024-07-26 11:35:51.873709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.538 qpair failed and we were unable to recover it. 
00:27:56.540 [2024-07-26 11:35:51.882006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.540 [2024-07-26 11:35:51.882036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.540 qpair failed and we were unable to recover it.
00:27:56.540 [2024-07-26 11:35:51.882154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.540 [2024-07-26 11:35:51.882183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.540 qpair failed and we were unable to recover it.
00:27:56.540 [2024-07-26 11:35:51.882445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.540 [2024-07-26 11:35:51.882516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.540 qpair failed and we were unable to recover it.
00:27:56.540 [2024-07-26 11:35:51.882753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.540 [2024-07-26 11:35:51.882788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.540 qpair failed and we were unable to recover it.
00:27:56.540 [2024-07-26 11:35:51.882974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.540 [2024-07-26 11:35:51.883005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.540 qpair failed and we were unable to recover it.
00:27:56.541 [2024-07-26 11:35:51.895201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.541 [2024-07-26 11:35:51.895231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.541 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.895355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.895385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.895557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.895587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.895776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.895806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.895940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.895970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 
00:27:56.542 [2024-07-26 11:35:51.896105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.896135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.896332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.896362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.896499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.896529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.896662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.896693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.896853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.896920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 
00:27:56.542 [2024-07-26 11:35:51.897065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.897099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.897220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.897251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.897375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.897405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.897525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.897555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.897735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.897767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 
00:27:56.542 [2024-07-26 11:35:51.897955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.897984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.898224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.898254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.898364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.898394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.898521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.898550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.898740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.898771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 
00:27:56.542 [2024-07-26 11:35:51.898890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.898919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.899112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.899142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.899331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.899374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.899620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.899663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.899849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.899879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 
00:27:56.542 [2024-07-26 11:35:51.900065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.900094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.900236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.900266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.900381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.900411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.900622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.900664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.900845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.900875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 
00:27:56.542 [2024-07-26 11:35:51.901061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.901091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.901297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.901327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.901448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.901478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.901653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.901684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.901916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.901947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 
00:27:56.542 [2024-07-26 11:35:51.902060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.902089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.902287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.902317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.902429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.542 [2024-07-26 11:35:51.902459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.542 qpair failed and we were unable to recover it. 00:27:56.542 [2024-07-26 11:35:51.902571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.902601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.902786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.902816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.902940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.902970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.903124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.903154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.903348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.903378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.903500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.903530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.903706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.903737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.903853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.903882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.903997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.904027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.904155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.904185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.904396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.904426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.904563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.904597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.904781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.904811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.905065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.905095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.905223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.905252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.905387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.905416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.905587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.905617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.905809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.905839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.905967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.905996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.906135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.906165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.906351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.906381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.906488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.906518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.906664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.906696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.906808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.906838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.906955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.906990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.907183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.907212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.907328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.907358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.907469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.907498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.907784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.907815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.907943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.907973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.908099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.908128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.908230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.908260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.908434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.908463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.908651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.908681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.908793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.908823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.908998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.909028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 00:27:56.543 [2024-07-26 11:35:51.909204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.909234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.543 qpair failed and we were unable to recover it. 
00:27:56.543 [2024-07-26 11:35:51.909352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.543 [2024-07-26 11:35:51.909381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.909507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.909537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.909732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.909763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.909941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.909970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.910184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.910213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 
00:27:56.544 [2024-07-26 11:35:51.910385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.910414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.910535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.910565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.910696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.910727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.910922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.910952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 00:27:56.544 [2024-07-26 11:35:51.911063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.544 [2024-07-26 11:35:51.911093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:56.544 qpair failed and we were unable to recover it. 
00:27:56.544 [2024-07-26 11:35:51.911207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.911237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.911413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.911442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.911563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.911592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.911720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.911751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.912054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.912087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.912337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.912367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.912481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.912511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.912646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.912675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.912848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.912878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.912988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.913017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.913134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.913164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.913341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.913371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.913561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.913590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.913774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.913805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.913917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.913946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.914073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.914102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.914227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.914257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.914433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.914468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.914667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.914699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.914893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.914923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.915165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.915195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.915331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.544 [2024-07-26 11:35:51.915361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.544 qpair failed and we were unable to recover it.
00:27:56.544 [2024-07-26 11:35:51.915487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.915517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.915657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.915688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.915796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.915825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.915942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.915972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.916099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.916130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.916259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.916288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.916397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.916427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.916556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.916586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.916724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.916754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.916935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.916966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.917144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.917173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.917365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.917395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.917503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.917533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.917716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.917747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.917853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.917883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.918100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.918129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.918313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.918343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.918516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.918546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.918837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.918867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.919051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.919081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.919266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.919296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.919471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.919501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.919620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.919664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.919774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.919803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.919929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.919959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.920085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.920114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.920284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.920313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.920504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.920533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.920729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.920759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.920888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.920918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.921034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.921062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.921249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.921279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.921456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.921485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.921597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.921635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.921863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.921896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.922025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.922058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.922172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.545 [2024-07-26 11:35:51.922200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.545 qpair failed and we were unable to recover it.
00:27:56.545 [2024-07-26 11:35:51.922443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.922472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.922713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.922745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.922884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.922914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.923041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.923070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.923193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.923222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.923340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.923370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.923478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.923507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.923609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.923648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.923756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.923786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.923977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.924007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.924188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.924218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.924356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.924385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.924534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.924564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.924762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.924792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.924909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.924939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.925108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.925138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.925350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.925380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.925519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.925549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.925687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.925717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.925825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.925855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.925985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.926014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.926120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.926150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.926356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.926386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.926509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.926538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.926746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.926777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.926980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.927014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.927124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.927154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.927330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.927360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.927505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.927536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.927652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.927683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.927861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.927891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.928011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.928042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.928171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.928201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.928330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.928360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.928468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.928498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.928622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.546 [2024-07-26 11:35:51.928661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.546 qpair failed and we were unable to recover it.
00:27:56.546 [2024-07-26 11:35:51.928793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.546 [2024-07-26 11:35:51.928823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.546 qpair failed and we were unable to recover it. 00:27:56.546 [2024-07-26 11:35:51.928958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.546 [2024-07-26 11:35:51.928988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.546 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.929099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.929135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.929329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.929359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.929482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.929512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 
00:27:56.547 [2024-07-26 11:35:51.929617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.929656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.929767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.929797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.929973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.930002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.930127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.930157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.930337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.930367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 
00:27:56.547 [2024-07-26 11:35:51.930561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.930592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.930732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.930763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.930905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.930934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.931053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.931082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 00:27:56.547 [2024-07-26 11:35:51.931259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.547 [2024-07-26 11:35:51.931289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.547 qpair failed and we were unable to recover it. 
00:27:56.547 [2024-07-26 11:35:51.931924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.547 [2024-07-26 11:35:51.931957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.547 qpair failed and we were unable to recover it.
00:27:56.549 [2024-07-26 11:35:51.949540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.549 [2024-07-26 11:35:51.949570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.549 qpair failed and we were unable to recover it. 00:27:56.549 [2024-07-26 11:35:51.949780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.549 [2024-07-26 11:35:51.949811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.549 qpair failed and we were unable to recover it. 00:27:56.549 [2024-07-26 11:35:51.949984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.549 [2024-07-26 11:35:51.950013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.950264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.950294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.950415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.950449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.950649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.950680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.950876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.950906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.951042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.951072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.951265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.951294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.951479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.951508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.951694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.951725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.951866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.951895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.952134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.952163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.952272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.952301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.952498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.952528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.952714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.952745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.953031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.953061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.953193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.953223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.953355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.953385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.953578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.953608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.953796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.953826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.953924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.953954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.954081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.954111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.954232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.954262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.954439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.954469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.954651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.954683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.954810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.954840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.955077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.955107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.955298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.955329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.955575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.955604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.955801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.955832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.956052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.956082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.956196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.956226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.956437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.956467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.956734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.956765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.956902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.956932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.957056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.957086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.957259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.957289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.957411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.957442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 00:27:56.550 [2024-07-26 11:35:51.957569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.550 [2024-07-26 11:35:51.957598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.550 qpair failed and we were unable to recover it. 
00:27:56.550 [2024-07-26 11:35:51.957719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.957751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.957919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.957949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.958059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.958089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.958227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.958258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.958381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.958416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551 [2024-07-26 11:35:51.958617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.958658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.958781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.958811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.958934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.958964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.959154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.959184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.959289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.959319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551 [2024-07-26 11:35:51.959435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.959464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.959573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.959602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.959732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.959763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.960026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.960055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.960190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.960220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551 [2024-07-26 11:35:51.960340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.960370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.960478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.960507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.960687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.960719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.960831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.960861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.961045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.961075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551 [2024-07-26 11:35:51.961196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.961227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.961353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.961383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.961491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.961521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.961648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.961679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.961871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.961901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551 [2024-07-26 11:35:51.962085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.962116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.962298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.962328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.962447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.962477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.962594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.962624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.962756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.962785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551 [2024-07-26 11:35:51.962893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.962923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.963101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.963131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.963236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.963266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.963374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.963404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 00:27:56.551 [2024-07-26 11:35:51.963584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.963613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551 [2024-07-26 11:35:51.963732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.551 [2024-07-26 11:35:51.963762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.551 qpair failed and we were unable to recover it. 
00:27:56.551-00:27:56.555 [last error triplet repeated ~114 more times between 11:35:51.963884 and 11:35:51.984691; every attempt failed identically with errno = 111 (ECONNREFUSED) on tqpair=0x7f4200000b90, addr=10.0.0.2, port=4420, and the qpair could not be recovered]
00:27:56.555 [2024-07-26 11:35:51.984872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.984902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.985034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.985064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.985274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.985304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.985544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.985575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.985707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.985738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 
00:27:56.555 [2024-07-26 11:35:51.985863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.985893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.986080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.986110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.986379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.986409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.986506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.986536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.986674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.986705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 
00:27:56.555 [2024-07-26 11:35:51.986821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.986851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.987023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.987052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.987271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.987301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.987484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.987515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.987710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.987746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 
00:27:56.555 [2024-07-26 11:35:51.987869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.987898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.988140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.988170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.988309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.988339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.988528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.988558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.988753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.988784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 
00:27:56.555 [2024-07-26 11:35:51.988958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.988987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.989167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.989196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.989458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.989487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.989619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.989660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.989779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.989809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 
00:27:56.555 [2024-07-26 11:35:51.989986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.990016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.990148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.990179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.990355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.990385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.990570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.990600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.990832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.990862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 
00:27:56.555 [2024-07-26 11:35:51.991057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.991087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.555 [2024-07-26 11:35:51.991208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.555 [2024-07-26 11:35:51.991238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.555 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.991379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.991408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.991682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.991713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.991905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.991934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 
00:27:56.556 [2024-07-26 11:35:51.992121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.992151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.992365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.992395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.992511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.992541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.992652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.992683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.992944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.992973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 
00:27:56.556 [2024-07-26 11:35:51.993103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.993133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.993264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.993294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.993482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.993512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.993640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.993672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.993851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.993881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 
00:27:56.556 [2024-07-26 11:35:51.994063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.994093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.994278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.994308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.994426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.994457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.994748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.994779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.994969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.994999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 
00:27:56.556 [2024-07-26 11:35:51.995219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.995249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.995371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.995401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.995529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.995558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.995736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.995768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.995887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.995923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 
00:27:56.556 [2024-07-26 11:35:51.996039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.996069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.996261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.996292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.996574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.996604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.996732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.996762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.996953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.996982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 
00:27:56.556 [2024-07-26 11:35:51.997225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.997256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.997502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.997532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.997667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.997697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.997832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.997861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.998045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.998075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 
00:27:56.556 [2024-07-26 11:35:51.998199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.998230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.998407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.998437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.998611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.998659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.998862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.998892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.556 qpair failed and we were unable to recover it. 00:27:56.556 [2024-07-26 11:35:51.999021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.556 [2024-07-26 11:35:51.999050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 
00:27:56.557 [2024-07-26 11:35:51.999188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:51.999218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:51.999407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:51.999438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:51.999555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:51.999584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:51.999784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:51.999815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:51.999950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:51.999980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 
00:27:56.557 [2024-07-26 11:35:52.000164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:52.000195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:52.000414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:52.000444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:52.000704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:52.000735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:52.000975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:52.001005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 00:27:56.557 [2024-07-26 11:35:52.001240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.557 [2024-07-26 11:35:52.001270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.557 qpair failed and we were unable to recover it. 
00:27:56.560 [2024-07-26 11:35:52.023751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.023783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.023958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.023988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.024124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.024154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.024269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.024299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.024433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.024463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 
00:27:56.560 [2024-07-26 11:35:52.024711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.024742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.024859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.024888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.025005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.025035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.025157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.025187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.025441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.025471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 
00:27:56.560 [2024-07-26 11:35:52.025696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.025727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.025841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.025872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.026056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.026086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.026217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.026247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.026386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.026415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 
00:27:56.560 [2024-07-26 11:35:52.026645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.026675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.026884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.026914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.027044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.027073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.027253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.027284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.027486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.027516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 
00:27:56.560 [2024-07-26 11:35:52.027780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.027811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.027937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.027967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.028102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.560 [2024-07-26 11:35:52.028132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.560 qpair failed and we were unable to recover it. 00:27:56.560 [2024-07-26 11:35:52.028247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.028277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.028483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.028518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.028711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.028743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.028866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.028896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.029010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.029040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.029223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.029253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.029367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.029397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.029646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.029677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.029792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.029821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.029929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.029959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.030142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.030171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.030289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.030319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.030598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.030637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.030882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.030912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.031099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.031129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.031253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.031283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.031407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.031436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.031574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.031604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.031741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.031771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.031899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.031929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.032048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.032078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.032203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.032233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.032426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.032456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.032650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.032680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.032802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.032832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.033009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.033039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.033153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.033183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.033361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.033390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.033518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.033548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.033733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.033764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.033956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.033986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.034177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.034207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.034459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.034488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.034613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.034651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.034896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.034927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.035116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.035146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.561 [2024-07-26 11:35:52.035256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.035286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 
00:27:56.561 [2024-07-26 11:35:52.035469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.561 [2024-07-26 11:35:52.035500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.561 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.035721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.035752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.035880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.035909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.036025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.036055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.036194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.036229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 
00:27:56.562 [2024-07-26 11:35:52.036347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.036377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.036638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.036668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.036846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.036876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.037004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.037034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.037164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.037193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 
00:27:56.562 [2024-07-26 11:35:52.037316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.037346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.037554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.037583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.037735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.037764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.037963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.037992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-07-26 11:35:52.038174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-26 11:35:52.038204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 
00:27:56.562 [2024-07-26 11:35:52.038316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.562 [2024-07-26 11:35:52.038345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.562 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with advancing timestamps from 11:35:52.038475 through 11:35:52.060678 ...]
00:27:56.565 [2024-07-26 11:35:52.060800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.060830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.060954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.060983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.061104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.061134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.061259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.061287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.061463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.061492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 
00:27:56.565 [2024-07-26 11:35:52.061672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.061704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.061835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.061866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.062051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.062081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.062261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.062289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.062414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.062445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 
00:27:56.565 [2024-07-26 11:35:52.062566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.062595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.062804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.062833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.062955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.062983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.063167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.063197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-07-26 11:35:52.063329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.063358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 
00:27:56.565 [2024-07-26 11:35:52.063561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-07-26 11:35:52.063590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.063722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.063751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.063971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.064001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.064111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.064140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.064328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.064357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-07-26 11:35:52.064538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.064568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.064676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.064713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.064955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.064985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.065094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.065123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.065306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.065336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-07-26 11:35:52.065449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.065480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.065590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.065619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.065751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.065782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.065896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.065925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.066111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.066140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-07-26 11:35:52.066267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.066296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.066487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.066518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.066641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.066671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.066782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.066812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.066920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.066949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-07-26 11:35:52.067070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.067099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.067388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.067418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.067535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.067565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.067689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.067719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.067905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.067935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-07-26 11:35:52.068149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.068179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.068289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.068319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.068517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.068546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.068731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.068761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.068968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.068999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-07-26 11:35:52.069140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.069169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.069291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.069321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.069429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.069457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.069596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.069625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.069816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.069846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-07-26 11:35:52.069955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.069985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.070196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.070225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.070413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-07-26 11:35:52.070442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-07-26 11:35:52.070616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.070654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.070842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.070874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-07-26 11:35:52.071073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.071103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.071346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.071376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.071492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.071521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.071707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.071737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.072030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.072061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-07-26 11:35:52.072304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.072334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.072449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.072489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.072593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.072624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.072753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.072783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.072971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.073001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-07-26 11:35:52.073181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.073209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.073400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.073429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.073612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.073661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.073852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.073882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.074054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.074084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-07-26 11:35:52.074220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.074250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.074438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.074467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.074645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.074675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.074886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.074916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-07-26 11:35:52.075032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-07-26 11:35:52.075062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-07-26 11:35:52.075194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:56.567 [2024-07-26 11:35:52.075223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 
00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.570 [2024-07-26 11:35:52.097478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-07-26 11:35:52.097508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-07-26 11:35:52.097665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-07-26 11:35:52.097696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-07-26 11:35:52.097811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-07-26 11:35:52.097841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-07-26 11:35:52.098009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-07-26 11:35:52.098039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-07-26 11:35:52.098158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-07-26 11:35:52.098187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 
00:27:56.570 [2024-07-26 11:35:52.098301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-07-26 11:35:52.098331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-07-26 11:35:52.098512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-07-26 11:35:52.098541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-07-26 11:35:52.098786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.098816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.098953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.098983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.099111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.099140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-07-26 11:35:52.099435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.099465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.099675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.099705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.099898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.099928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.100051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.100081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.100212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.100241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-07-26 11:35:52.100376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.100405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.100587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.100617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.100762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.100792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.100968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.100998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.101181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.101211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-07-26 11:35:52.101333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.101364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.101544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.101573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.101712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.101743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.101929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.101959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.102132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.102161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-07-26 11:35:52.102347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.102376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.102479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.102508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.102685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.102715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.102910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.102940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.103058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.103088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-07-26 11:35:52.103215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.103245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.103373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.103408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.103604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.103641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.103772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.103802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.103977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.104007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-07-26 11:35:52.104248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.104277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.104396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.104426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.104600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.104636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.104810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.104840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.104967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.104996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-07-26 11:35:52.105202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.105232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.105418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.105448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.105619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.105672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.105866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.105896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-07-26 11:35:52.106022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-07-26 11:35:52.106051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 
00:27:56.572 [2024-07-26 11:35:52.106318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.106348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.106454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.106484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.106597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.106637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.106842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.106872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.106998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.107028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 
00:27:56.572 [2024-07-26 11:35:52.107200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.107230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.107436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.107465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.107582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.107612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.107819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.107849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.107966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.107995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 
00:27:56.572 [2024-07-26 11:35:52.108166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.108195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.108332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.108362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.108490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.108520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.108654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.108686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.108797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.108826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 
00:27:56.572 [2024-07-26 11:35:52.108947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.108977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.109118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.109148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.109280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.109309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.109419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.109448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.109622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.109672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 
00:27:56.572 [2024-07-26 11:35:52.109783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.109813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.109924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.109953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.110196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.110225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.110342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.110372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-07-26 11:35:52.110547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-07-26 11:35:52.110577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 
00:27:56.572 [2024-07-26 11:35:52.110704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-07-26 11:35:52.110734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-07-26 11:35:52.110844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-07-26 11:35:52.110884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-07-26 11:35:52.111072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-07-26 11:35:52.111101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-07-26 11:35:52.111293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-07-26 11:35:52.111323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-07-26 11:35:52.111462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-07-26 11:35:52.111492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 
00:27:56.573 [2024-07-26 11:35:52.111609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.111662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.111795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.111824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.111940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.111969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.112080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.112110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.112329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.112358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.112607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.112647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.112828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.112858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.113063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.113092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.113197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.113227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.113448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.113477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.113617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.113658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.113769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.113799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.113976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.114005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.114135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.114164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.114428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.114459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.114584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.114613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.114760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.114790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.114917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.114947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.115064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.115094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.115211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.115241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.115374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.115403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.115579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.115609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.115867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.115898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.116150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.116222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.116427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.116460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.116654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.116688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.116812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.116842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.117108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.117138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.117253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.117283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.117413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.117443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.117556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.117586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.117736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.573 [2024-07-26 11:35:52.117767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.573 qpair failed and we were unable to recover it.
00:27:56.573 [2024-07-26 11:35:52.118028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.118057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.118165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.118195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.118367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.118397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.118503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.118532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.118657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.118688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.118888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.118919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.119025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.119055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.119241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.119270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.119515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.119545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.119677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.119708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.119831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.119861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.120051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.120082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.120229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.120258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.120371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.120401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.120524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.120553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.120689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.120721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.120844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.120874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.121070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.121099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.121217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.121263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.121458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.121487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.121596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.121637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.121768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.121798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.121900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.121930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.122104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.122134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.122254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.122285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.122455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.122485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.122704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.122735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.122866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.122896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.123073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.123103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.123277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.123306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.123425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.123454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.123593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.123623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.123764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.123795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.574 qpair failed and we were unable to recover it.
00:27:56.574 [2024-07-26 11:35:52.123915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.574 [2024-07-26 11:35:52.123945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.124119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.124149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.124324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.124354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.124463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.124492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.124677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.124708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.124893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.124923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.125194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.125224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.125413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.125443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.125709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.125739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.125880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.125910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.126129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.126159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.126280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.126310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.126449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.126484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.126602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.126642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.126888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.126918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.127034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.127064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.127240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.127270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.127392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.127423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.127597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.127642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.127821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.127851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.127969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.127999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.128242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.128271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.128397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.128426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.128558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.128588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.128789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.128820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.129070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.129100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.129230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.129261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.129379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.129408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.129580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.129610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.129807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.129838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.130016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.130046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.130163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.130193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.130313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.130343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.130518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.130547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.130670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.130701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.130945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.130975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.131153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.131183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.131350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.131380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.131487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.131517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.131648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.575 [2024-07-26 11:35:52.131684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.575 qpair failed and we were unable to recover it.
00:27:56.575 [2024-07-26 11:35:52.131893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.131923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.132031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.132061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.132252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.132282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.132395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.132425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.132536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.132566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.132842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.132873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.133153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.133183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.133371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.133401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.133603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.133642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.133824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.576 [2024-07-26 11:35:52.133854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.576 qpair failed and we were unable to recover it.
00:27:56.576 [2024-07-26 11:35:52.133983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.134012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.134128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.134158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.134295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.134325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.134449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.134480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.134723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.134754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 
00:27:56.576 [2024-07-26 11:35:52.134876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.134906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.135091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.135120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.135389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.135418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.135552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.135582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.135852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.135884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 
00:27:56.576 [2024-07-26 11:35:52.136061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.136091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.136213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.136242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.136430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.136459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.136641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.136672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.136858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.136887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 
00:27:56.576 [2024-07-26 11:35:52.137006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.137036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.137154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.137183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.137311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.137341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.137525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.137555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.137675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.137706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 
00:27:56.576 [2024-07-26 11:35:52.137917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.137947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.138079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.138109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.138348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.138378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.138561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.138592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-07-26 11:35:52.138776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-07-26 11:35:52.138807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 
00:27:56.576 [2024-07-26 11:35:52.138920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.138950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.139060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.139089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.139203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.139233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.139358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.139388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.139565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.139595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 
00:27:56.577 [2024-07-26 11:35:52.139820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.139851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.139972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.140002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.140123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.140152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.140394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.140424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.140549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.140579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 
00:27:56.577 [2024-07-26 11:35:52.140785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.140817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.141000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.141030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.141219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.141249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.141372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.141402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.141581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.141611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 
00:27:56.577 [2024-07-26 11:35:52.141735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.141765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.141889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.141919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.142038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.142067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.142241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.142271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.142381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.142411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 
00:27:56.577 [2024-07-26 11:35:52.142601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.142641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.142816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.142846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.142959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.142989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.143101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.143131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.143238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.143267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 
00:27:56.577 [2024-07-26 11:35:52.143374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.143404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.143519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.143549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.143738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.143770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.143965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.143995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.144114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.144143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 
00:27:56.577 [2024-07-26 11:35:52.144385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.144415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.144600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.144638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.144779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.144814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.145062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.577 [2024-07-26 11:35:52.145092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.577 qpair failed and we were unable to recover it. 00:27:56.577 [2024-07-26 11:35:52.145285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.145314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 
00:27:56.578 [2024-07-26 11:35:52.145506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.145535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.145671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.145701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.145887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.145916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.146034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.146064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.146270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.146299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 
00:27:56.578 [2024-07-26 11:35:52.146568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.146598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.146783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.146813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.146922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.146952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.147061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.147091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.147210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.147240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 
00:27:56.578 [2024-07-26 11:35:52.147362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.147392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.147576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.147606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.147752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.147782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.147902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.147932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.148111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.148141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 
00:27:56.578 [2024-07-26 11:35:52.148383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.148413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.148541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.148571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.148684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.148715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.148892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.148921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.149059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.149088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 
00:27:56.578 [2024-07-26 11:35:52.149214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.149244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.149393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.149423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.149605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.149645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.149767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.149797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 00:27:56.578 [2024-07-26 11:35:52.149900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.578 [2024-07-26 11:35:52.149934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.578 qpair failed and we were unable to recover it. 
00:27:56.578 [2024-07-26 11:35:52.150040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.150070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.150291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.150321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.150496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.150525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.150647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.150677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.150854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.150884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.151123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.151152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.151331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.151361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.151462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.151491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.151679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.151710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.151849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.151878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.152004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.152034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.152152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.152181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.152371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.152401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.578 qpair failed and we were unable to recover it.
00:27:56.578 [2024-07-26 11:35:52.152637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.578 [2024-07-26 11:35:52.152668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.152802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.152831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.153020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.153049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.153220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.153249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.153365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.153394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.153508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.153538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.153721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.153751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.153862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.153891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.154076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.154105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.154226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.154255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.154430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.154460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.154653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.154683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.154801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.154830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.154937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.154967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.155145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.155174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.155362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.155393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.155502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.155532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.155714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.155745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.155954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.155984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.156114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.156144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.156353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.156382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.156570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.156600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.156799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.156830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.157022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.157051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.157178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.157207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.157392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.157422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.157535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.157564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.157702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.157733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.157925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.157955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.158071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.158101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.158308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.158337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.158542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.158572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.158705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.158736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.158844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.158874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.159007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.159036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.159216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.159246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.159439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.159469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.159579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.159608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.159806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.159837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.160014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.160043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.160218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.160247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.160381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.160411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.160551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-07-26 11:35:52.160580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.579 qpair failed and we were unable to recover it.
00:27:56.579 [2024-07-26 11:35:52.160734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.160769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.160900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.160930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.161137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.161166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.161381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.161410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.161531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.161561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.161753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.161784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.161958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.161987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.162121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.162151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.162275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.162304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.162481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.162511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.162777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.162809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.162991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.163026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.163214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.163244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.163353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.163383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.163578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.163607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.163723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.163753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.163943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.163973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.164212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.164242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.164342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.164370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.164472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.164500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.164789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.164837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.165072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.165112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.165250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.165281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.165428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.165458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.165730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.165764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.165965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.165995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.166122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.166152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.166349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.166379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.166568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.166598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.166734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.166765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.166908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.166953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.167181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.167216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.167402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.167432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.167618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.167663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.167845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.167876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.168000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.168030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.168214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.168244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.168428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.168458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.168659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.580 [2024-07-26 11:35:52.168698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.580 qpair failed and we were unable to recover it.
00:27:56.580 [2024-07-26 11:35:52.168832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.580 [2024-07-26 11:35:52.168866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.580 qpair failed and we were unable to recover it. 00:27:56.580 [2024-07-26 11:35:52.169007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.169051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.169300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.169350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.169557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.169611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.169901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.169959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 
00:27:56.861 [2024-07-26 11:35:52.170133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.170185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.170464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.170499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.170620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.170664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.170842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.170878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.171002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.171031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 
00:27:56.861 [2024-07-26 11:35:52.171231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.171271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.171401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.171429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.171616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.171668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.171798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.171827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.172025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.172067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 
00:27:56.861 [2024-07-26 11:35:52.172205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.172234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.172475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.172509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.861 [2024-07-26 11:35:52.172695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.861 [2024-07-26 11:35:52.172727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.861 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.172933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.172966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.173068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.173097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 
00:27:56.862 [2024-07-26 11:35:52.173216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.173248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.173488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.173518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.173720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.173769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.173971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.174010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.174215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.174255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 
00:27:56.862 [2024-07-26 11:35:52.174457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.174497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.174679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.174730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.174851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.174882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.175101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.175148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.175278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.175309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 
00:27:56.862 [2024-07-26 11:35:52.175489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.175519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.175657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.175690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.175876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.175906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.176089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.176120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.176325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.176355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 
00:27:56.862 [2024-07-26 11:35:52.176569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.176600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.176860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.176892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.177022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.177053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.177175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.177205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.177331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.177361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 
00:27:56.862 [2024-07-26 11:35:52.177491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.177522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.177653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.177686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.177825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.177855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.178030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.178060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.178170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.178200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 
00:27:56.862 [2024-07-26 11:35:52.178306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.178336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.178578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.178609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.178745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.178776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.178905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.178935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 00:27:56.862 [2024-07-26 11:35:52.179110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.862 [2024-07-26 11:35:52.179141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.862 qpair failed and we were unable to recover it. 
00:27:56.863 [2024-07-26 11:35:52.179319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.179349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.179464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.179494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.179609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.179649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.179836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.179866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.179995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.180026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 
00:27:56.863 [2024-07-26 11:35:52.180147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.180178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.180416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.180445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.180625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.180666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.180782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.180812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.181008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.181039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 
00:27:56.863 [2024-07-26 11:35:52.181167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.181197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.181379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.181409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.181536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.181566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.181683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.181715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.181825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.181856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 
00:27:56.863 [2024-07-26 11:35:52.181970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.182000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.182202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.182232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.182413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c6ff0 is same with the state(5) to be set 00:27:56.863 [2024-07-26 11:35:52.182787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.182856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.182992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.183026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.183205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.183235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 
00:27:56.863 [2024-07-26 11:35:52.183477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.183507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.183707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.183740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.183873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.183904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.184017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.184047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.184244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.184273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 
00:27:56.863 [2024-07-26 11:35:52.184467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.184497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.184642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.184673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.184885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.184915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.185196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.185226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.185358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.185388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 
00:27:56.863 [2024-07-26 11:35:52.185518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.185548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.185767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.185797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.185995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.863 [2024-07-26 11:35:52.186025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.863 qpair failed and we were unable to recover it. 00:27:56.863 [2024-07-26 11:35:52.186157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.864 [2024-07-26 11:35:52.186187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.864 qpair failed and we were unable to recover it. 00:27:56.864 [2024-07-26 11:35:52.186368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.864 [2024-07-26 11:35:52.186397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.864 qpair failed and we were unable to recover it. 
00:27:56.864 [2024-07-26 11:35:52.186528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.186558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.186678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.186709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.186891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.186921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.187096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.187126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.187300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.187330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.187571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.187601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.187865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.187899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.188014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.188044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.188233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.188268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.188382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.188412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.188595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.188636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.188840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.188870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.189043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.189073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.189339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.189369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.189641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.189672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.189898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.189927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.190196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.190226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.190352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.190382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.190568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.190599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.190738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.190770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.190984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.191013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.191199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.191229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.191341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.191371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.191551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.191580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.191797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.191828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.192012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.192042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.192226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.192257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.192445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.192474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.192687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.192717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.192862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.864 [2024-07-26 11:35:52.192892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-26 11:35:52.193019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.193048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.193155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.193185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.193357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.193387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.193503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.193533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.193661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.193693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.193875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.193906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.194152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.194183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.194355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.194385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.194571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.194601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.194729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.194763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.194865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.194896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.195090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.195119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.195293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.195323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.195432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.195462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.195658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.195688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.195860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.195891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.196134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.196163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.196296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.196325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.196565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.196595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.196803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.196833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.197021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.197050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.197162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.197192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.197377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.197407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.197515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.197545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.197804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.197834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.198020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.198050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.198176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.198205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.198374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.198403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.198645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.198675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.198785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.198815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.198942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.198971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.199089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.199119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.199241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.199277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.865 [2024-07-26 11:35:52.199399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.865 [2024-07-26 11:35:52.199429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.865 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.199561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.199590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.199773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.199804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.199978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.200008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.200212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.200241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.200454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.200483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.200656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.200687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.200875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.200905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.201098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.201127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.201370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.201399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.201592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.201622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.201822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.201852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.201959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.201988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.202261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.202291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.202483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.202513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.202803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.202834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.202978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.203007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.203201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.203231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.203420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.203450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.203740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.203770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.203890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.203920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.204094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.204124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.204312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.204342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.204581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.204611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.204736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.204767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.204896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.204926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.205077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.205107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.866 [2024-07-26 11:35:52.205370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.866 [2024-07-26 11:35:52.205400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.866 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.205511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.205541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.205737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.205767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.205935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.205965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.206087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.206117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.206390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.206420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.206618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.206671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.206804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.206834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.206958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.206988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.207257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.207287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.207401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.207431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.207603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.207643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.207857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.207892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.208083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.208113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.208352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.208382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.208517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.208547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.208721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.208754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.208941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.208971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.209215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.209244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.209430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.209460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.209752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.209783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.209998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.210027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.210217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.210247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.210454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.210484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.210597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.867 [2024-07-26 11:35:52.210635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.867 qpair failed and we were unable to recover it.
00:27:56.867 [2024-07-26 11:35:52.210885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.867 [2024-07-26 11:35:52.210915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.867 qpair failed and we were unable to recover it. 00:27:56.867 [2024-07-26 11:35:52.211107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.867 [2024-07-26 11:35:52.211137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.867 qpair failed and we were unable to recover it. 00:27:56.867 [2024-07-26 11:35:52.211318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.867 [2024-07-26 11:35:52.211349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.867 qpair failed and we were unable to recover it. 00:27:56.867 [2024-07-26 11:35:52.211589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.867 [2024-07-26 11:35:52.211618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.867 qpair failed and we were unable to recover it. 00:27:56.867 [2024-07-26 11:35:52.211801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.867 [2024-07-26 11:35:52.211831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.867 qpair failed and we were unable to recover it. 
00:27:56.867 [2024-07-26 11:35:52.212017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.867 [2024-07-26 11:35:52.212047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.867 qpair failed and we were unable to recover it. 00:27:56.867 [2024-07-26 11:35:52.212235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.867 [2024-07-26 11:35:52.212264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.867 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.212386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.212416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.212594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.212623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.212843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.212873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 
00:27:56.868 [2024-07-26 11:35:52.213130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.213160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.213428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.213458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.213663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.213695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.213977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.214007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.214257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.214287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 
00:27:56.868 [2024-07-26 11:35:52.214425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.214454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.214654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.214685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.214860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.214889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.215075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.215105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.215233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.215263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 
00:27:56.868 [2024-07-26 11:35:52.215482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.215512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.215776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.215807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.215940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.215970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.216172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.216201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.216471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.216500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 
00:27:56.868 [2024-07-26 11:35:52.216742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.216772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.217024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.217054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.217192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.217228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.217498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.217527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.217666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.217697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 
00:27:56.868 [2024-07-26 11:35:52.217872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.217902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.218170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.218200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.218393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.218423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.218639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.218670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.218843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.218873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 
00:27:56.868 [2024-07-26 11:35:52.219066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.219096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.219289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.219319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.219587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.219618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.219747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.219777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.219970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.219999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 
00:27:56.868 [2024-07-26 11:35:52.220181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.220211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.220311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.220341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.868 [2024-07-26 11:35:52.220536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.868 [2024-07-26 11:35:52.220566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.868 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.220820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.220850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.221107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.221137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 
00:27:56.869 [2024-07-26 11:35:52.221261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.221290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.221532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.221562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.221832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.221864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.222051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.222081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.222291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.222321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 
00:27:56.869 [2024-07-26 11:35:52.222579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.222610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.222752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.222783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.223028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.223058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.223174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.223204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.223396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.223426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 
00:27:56.869 [2024-07-26 11:35:52.223642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.223672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.223855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.223885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.224002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.224032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.224165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.224195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.224366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.224396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 
00:27:56.869 [2024-07-26 11:35:52.224660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.224691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.224887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.224918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.225131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.225161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.225279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.225309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.225515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.225545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 
00:27:56.869 [2024-07-26 11:35:52.225782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.225813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.225918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.225949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.226128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.226164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.226288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.226317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.226561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.226591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 
00:27:56.869 [2024-07-26 11:35:52.226719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.226750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.226923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.226953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.227133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.227163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.227347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.227377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 00:27:56.869 [2024-07-26 11:35:52.227514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.869 [2024-07-26 11:35:52.227543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.869 qpair failed and we were unable to recover it. 
00:27:56.869 [2024-07-26 11:35:52.227813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.869 [2024-07-26 11:35:52.227844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.869 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error (tqpair=0x7f4200000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence repeated ~115 more times, timestamps 2024-07-26 11:35:52.228041 through 11:35:52.254372 (log time 00:27:56.869-00:27:56.873)]
00:27:56.873 [2024-07-26 11:35:52.254503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.254533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.254744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.254775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.254964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.254995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.255238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.255268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.255536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.255565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 
00:27:56.873 [2024-07-26 11:35:52.255773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.255804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.256016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.256046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.256244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.256274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.256392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.256423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.256550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.256580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 
00:27:56.873 [2024-07-26 11:35:52.256799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.256830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.257038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.257068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.257242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.257272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.257539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.257569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.257768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.257799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 
00:27:56.873 [2024-07-26 11:35:52.257974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.258005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.258140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.258170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.258363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.258393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.258520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.258550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.258760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.258792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 
00:27:56.873 [2024-07-26 11:35:52.258972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.259002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.259249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.259279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.259431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.259461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.259640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.259672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.259796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.259827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 
00:27:56.873 [2024-07-26 11:35:52.260013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.260043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.260247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.260277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.260406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.260437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.260646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.260678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.260816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.260846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 
00:27:56.873 [2024-07-26 11:35:52.261043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.261074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.873 [2024-07-26 11:35:52.261259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.873 [2024-07-26 11:35:52.261289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.873 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.261473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.261503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.261620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.261676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.261808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.261837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 
00:27:56.874 [2024-07-26 11:35:52.261962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.261993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.262183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.262213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.262319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.262354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.262594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.262625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.262839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.262869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 
00:27:56.874 [2024-07-26 11:35:52.262989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.263019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.263217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.263248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.263368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.263398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.263590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.263620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.263754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.263784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 
00:27:56.874 [2024-07-26 11:35:52.263962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.263992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.264256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.264286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.264415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.264445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.264557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.264587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.264810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.264840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 
00:27:56.874 [2024-07-26 11:35:52.265110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.265140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.265342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.265373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.265647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.265679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.265919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.265948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.266087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.266118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 
00:27:56.874 [2024-07-26 11:35:52.266239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.266270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.266511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.266542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.266724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.266756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.266889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.266919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.267032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.267062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 
00:27:56.874 [2024-07-26 11:35:52.267321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.267351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.267643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.267673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.267805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.267835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.268008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.268037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.268216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.268285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 
00:27:56.874 [2024-07-26 11:35:52.268573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.268606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.268824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.268857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.269037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.269068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.269191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.269221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.874 qpair failed and we were unable to recover it. 00:27:56.874 [2024-07-26 11:35:52.269499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.874 [2024-07-26 11:35:52.269529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 
00:27:56.875 [2024-07-26 11:35:52.269715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.269747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.269969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.269999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.270134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.270164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.270361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.270391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.270513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.270543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 
00:27:56.875 [2024-07-26 11:35:52.270786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.270818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.270999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.271030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.271295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.271325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.271580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.271610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 00:27:56.875 [2024-07-26 11:35:52.271740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.875 [2024-07-26 11:35:52.271772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.875 qpair failed and we were unable to recover it. 
00:27:56.878 [2024-07-26 11:35:52.295739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.295770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.295887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.295917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.296124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.296154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.296431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.296461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.296646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.296677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 
00:27:56.878 [2024-07-26 11:35:52.296851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.296881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.297004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.297034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.297203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.297232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.297430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.297460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.297650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.297681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 
00:27:56.878 [2024-07-26 11:35:52.297906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.297936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.298150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.298179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.298358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.298387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.298658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.298689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.298863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.298893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 
00:27:56.878 [2024-07-26 11:35:52.299029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.299059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.299232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.299261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.299471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.299500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.299662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.299693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.299886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.299916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 
00:27:56.878 [2024-07-26 11:35:52.300109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.300139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.300269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.300299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.300427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.300457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.300639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.300675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 00:27:56.878 [2024-07-26 11:35:52.300850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.878 [2024-07-26 11:35:52.300880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.878 qpair failed and we were unable to recover it. 
00:27:56.878 [2024-07-26 11:35:52.301065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.301094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.301223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.301252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.301509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.301539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.301646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.301677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.301877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.301906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 
00:27:56.879 [2024-07-26 11:35:52.302032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.302061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.302178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.302207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.302380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.302410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.302675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.302706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.302897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.302927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 
00:27:56.879 [2024-07-26 11:35:52.303143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.303173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.303338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.303367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.303576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.303605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.303777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.303810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.304016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.304046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 
00:27:56.879 [2024-07-26 11:35:52.304148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.304177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.304273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.304303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.304475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.304504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.304704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.304735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.304848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.304877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 
00:27:56.879 [2024-07-26 11:35:52.305004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.305033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.305297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.305326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.305530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.305560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.305750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.305781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.305902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.305931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 
00:27:56.879 [2024-07-26 11:35:52.306055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.306090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.306286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.306316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.306553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.306583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.306775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.306806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.306925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.306955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 
00:27:56.879 [2024-07-26 11:35:52.307119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.307149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.307255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.307285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.307464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.307494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.307666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.307698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.307834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.307864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 
00:27:56.879 [2024-07-26 11:35:52.308045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.308075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.308318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.308348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.879 [2024-07-26 11:35:52.308538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.879 [2024-07-26 11:35:52.308567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.879 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.308690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.308721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.309038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.309068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.309239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.309275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.309411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.309440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.309616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.309654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.309920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.309950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.310204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.310234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.310420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.310449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.310656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.310686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.310820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.310850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.311054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.311084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.311207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.311237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.311339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.311368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.311543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.311573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.311770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.311802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.312023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.312053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.312171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.312200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.312374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.312403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.312581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.312610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.312798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.312828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.313017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.313046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.313467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.313502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.313702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.313736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.313940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.313970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.314211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.314242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.314409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.314439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.314702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.314733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.314926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.314956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.315154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.315184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.315376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.315406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.315589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.315619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.315816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.315846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.316032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.316063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.316279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.316308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.316532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.316562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.316801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.316832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-07-26 11:35:52.317025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.317055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-07-26 11:35:52.317190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-07-26 11:35:52.317221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.317404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.317434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.317725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.317755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.317929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.317959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.318196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.318226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-07-26 11:35:52.318356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.318386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.318646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.318678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.318971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.319001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.319264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.319293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.319480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.319509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-07-26 11:35:52.319665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.319696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.319839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.319870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.320107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.320136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.320330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.320360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.320573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.320603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-07-26 11:35:52.320722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.320752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.320877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.320906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.321030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.321060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.321308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.321344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.321533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.321563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-07-26 11:35:52.321753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.321784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.321990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.322019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.322274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.322304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.322442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.322471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.322588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.322618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-07-26 11:35:52.322821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.322851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.322970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.323000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.323263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.323293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.323419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.323449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.323546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.323577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-07-26 11:35:52.323781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.323812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.323929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.323959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-07-26 11:35:52.324140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-07-26 11:35:52.324170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.324347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.324377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.324513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.324542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.324664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.324695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.324935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.324965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.325148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.325178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.325300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.325329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.325500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.325531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.325721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.325751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.325926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.325956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.326080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.326110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.326367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.326397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.326592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.326622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.326756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.326791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.326900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.326929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.327107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.327137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.327326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.327356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.327545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.327575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.327722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.327752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.327922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.327952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.328225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.328255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.328520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.328550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.328684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.328714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.328922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.328952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.329125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.329154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.329341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.329371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.329564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.329593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.329805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.329836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.329971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.330001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.330185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.330216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.330484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.330513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.330717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.330748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.330938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.330968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.331150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.331180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.331431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.331461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.331596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.331625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.331937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.331966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-07-26 11:35:52.332166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-07-26 11:35:52.332196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-07-26 11:35:52.332395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.882 [2024-07-26 11:35:52.332425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.882 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.332658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.332688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.332822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.332858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.333046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.333075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.333335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.333365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.333540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.333569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.333790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.333820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.334006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.334036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.334140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.334170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.334368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.334398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.334616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.334656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.334875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.334905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.335147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.335177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.335357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.335387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.335514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.335543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.335756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.335787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.336049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.336117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.336304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.336373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.336599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.336678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.336909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.336944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.337148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.337179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.337446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.337477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.337674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.337704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.337833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.337863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.338051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.338081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.338291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.338320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.338498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.338528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.338644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.338674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.338888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.338918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.339092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.339130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.339255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.339284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.339482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.339512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.339756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.339786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.339979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.340009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.340152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.340182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.340304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.340334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.340454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.340483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.340607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.340647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-07-26 11:35:52.340913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-07-26 11:35:52.340944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.341231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.341261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.341375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.341404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.341589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.341619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.341840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.341870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.342064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.342094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.342217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.342247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.342450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.342480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.342676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.342706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.342826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.342855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.342971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.343000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.343203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.343233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.343421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.343451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.343624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.343665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.343958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.343988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.344117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.344146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.344329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.344359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.344498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.344527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.344725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.344766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.345040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.345071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.345216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.345246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.345422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.345452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.345646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.345683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.345869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.345899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.346140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.346170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.346349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.346378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.346601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.346640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.346818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.346848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.347048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.347077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.347263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.347293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.347539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.347569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.347703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.347743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.347929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.347959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.348154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.348183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.348301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.348331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.348448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.348477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.348590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.348620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.348820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.884 [2024-07-26 11:35:52.348851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.884 qpair failed and we were unable to recover it.
00:27:56.884 [2024-07-26 11:35:52.349035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.349065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.349188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.349217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.349324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.349353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.349604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.349644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.349824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.349853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.350044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.350074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.350313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.350343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.350475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.350505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.350640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.350671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.350919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.350949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.351120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.351149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.351415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.351445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.351642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.351673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.351893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.351923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.352117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.352146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.352393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.352422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.352686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.352718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.352894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.352923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.353148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.353178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.353369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.353398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.353580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.353610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.353821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.353852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.354067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.354097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.354275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.354305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.354481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.354511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.354705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.354736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.354976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.355006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.355197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.355226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.355466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.355495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.355638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.355669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.355793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.355823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.885 [2024-07-26 11:35:52.355994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.885 [2024-07-26 11:35:52.356024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.885 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.356266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.886 [2024-07-26 11:35:52.356296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.886 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.356538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.886 [2024-07-26 11:35:52.356572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.886 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.356874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.886 [2024-07-26 11:35:52.356905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.886 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.357100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.886 [2024-07-26 11:35:52.357130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.886 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.357344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.886 [2024-07-26 11:35:52.357374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.886 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.357546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.886 [2024-07-26 11:35:52.357575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.886 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.357850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.886 [2024-07-26 11:35:52.357881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:56.886 qpair failed and we were unable to recover it.
00:27:56.886 [2024-07-26 11:35:52.358146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.358175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.358365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.358395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.358519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.358548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.358680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.358711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.358819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.358849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 
00:27:56.886 [2024-07-26 11:35:52.359037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.359067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.359177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.359206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.359382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.359412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.359590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.359620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.359857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.359887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 
00:27:56.886 [2024-07-26 11:35:52.360085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.360115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.360314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.360345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.360529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.360559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.360801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.360831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.361100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.361130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 
00:27:56.886 [2024-07-26 11:35:52.361369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.361399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.361594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.361624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.361907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.361937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.362129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.362158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.362331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.362361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 
00:27:56.886 [2024-07-26 11:35:52.362619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.362660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.362865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.362895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.363158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.363189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.363292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.363322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.363443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.363473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 
00:27:56.886 [2024-07-26 11:35:52.363664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.363696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.363887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.363917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.364110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.364141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.364319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.364349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 00:27:56.886 [2024-07-26 11:35:52.364545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.886 [2024-07-26 11:35:52.364575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.886 qpair failed and we were unable to recover it. 
00:27:56.886 [2024-07-26 11:35:52.364713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.364744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.364938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.364967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.365191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.365222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.365473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.365503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.365643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.365681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 
00:27:56.887 [2024-07-26 11:35:52.365874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.365904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.366090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.366120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.366320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.366350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.366519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.366548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.366805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.366837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 
00:27:56.887 [2024-07-26 11:35:52.366953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.366984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.367197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.367229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.367403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.367433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.367649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.367680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.367803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.367833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 
00:27:56.887 [2024-07-26 11:35:52.368008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.368037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.368165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.368195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.368458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.368489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.368731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.368761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.369052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.369082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 
00:27:56.887 [2024-07-26 11:35:52.369272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.369302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.369426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.369456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.369644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.369675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.369919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.369949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.370085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.370115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 
00:27:56.887 [2024-07-26 11:35:52.370226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.370256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.370455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.370485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.370673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.370704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.370830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.370862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.371054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.371086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 
00:27:56.887 [2024-07-26 11:35:52.371275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.371305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.371517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.371547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.371722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.371755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.371962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.371995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.372203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.372233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 
00:27:56.887 [2024-07-26 11:35:52.372412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.372443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.372572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.372606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.372746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.887 [2024-07-26 11:35:52.372776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.887 qpair failed and we were unable to recover it. 00:27:56.887 [2024-07-26 11:35:52.372969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.372999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-07-26 11:35:52.373221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.373250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 
00:27:56.888 [2024-07-26 11:35:52.373437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.373466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-07-26 11:35:52.373664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.373695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-07-26 11:35:52.373884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.373913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-07-26 11:35:52.374122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.374152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-07-26 11:35:52.374327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.374357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 
00:27:56.888 [2024-07-26 11:35:52.374477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-07-26 11:35:52.374508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats approximately 114 more times, with timestamps from 11:35:52.374702 through 11:35:52.397404 ...]
00:27:56.891 [2024-07-26 11:35:52.397579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.397609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.397899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.397930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.398192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.398221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.398351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.398381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.398568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.398598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-07-26 11:35:52.398743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.398775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.399032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.399062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.399173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.399202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.399395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.399425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.399688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.399719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-07-26 11:35:52.399841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.399871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.399998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.400028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.400137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.400166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.400345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.400375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.400491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.400521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-07-26 11:35:52.400784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.400815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.401080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.401110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.401222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.401251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.401431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.401461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.401647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.401678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-07-26 11:35:52.401968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.401998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.402184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-07-26 11:35:52.402213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-07-26 11:35:52.402338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.402368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.402538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.402567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.402709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.402740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.892 [2024-07-26 11:35:52.402933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.402963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.403166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.403196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.403439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.403470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.403675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.403706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.403902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.403931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.892 [2024-07-26 11:35:52.404119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.404149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.404325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.404360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.404599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.404635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.404823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.404853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.405044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.405074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.892 [2024-07-26 11:35:52.405249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.405279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.405390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.405420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.405599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.405644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.405910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.405940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.406064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.406093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.892 [2024-07-26 11:35:52.406266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.406296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.406412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.406442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.406641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.406672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.406847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.406877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.407069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.407099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.892 [2024-07-26 11:35:52.407285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.407315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.407429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.407459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.407653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.407684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.407949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.407979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.408174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.408203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.892 [2024-07-26 11:35:52.408326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.408356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.408580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.408610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.408816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.408847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.409057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.409086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.409258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.409288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.892 [2024-07-26 11:35:52.409462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.409493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.409599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.409644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.409759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.409789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.409981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.410011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 00:27:56.892 [2024-07-26 11:35:52.410149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.892 [2024-07-26 11:35:52.410178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.892 qpair failed and we were unable to recover it. 
00:27:56.893 [2024-07-26 11:35:52.410351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.410381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.410625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.410665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.410784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.410814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.411000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.411030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.411250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.411279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 
00:27:56.893 [2024-07-26 11:35:52.411472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.411502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.411687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.411718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.411908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.411939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.412108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.412138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.412387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.412417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 
00:27:56.893 [2024-07-26 11:35:52.412603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.412653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.412849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.412885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.413068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.413098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.413202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.413231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.413404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.413433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 
00:27:56.893 [2024-07-26 11:35:52.413676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.413706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.413812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.413841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.414029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.414060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.414199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.414228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 00:27:56.893 [2024-07-26 11:35:52.414488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.893 [2024-07-26 11:35:52.414517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.893 qpair failed and we were unable to recover it. 
00:27:56.896 [2024-07-26 11:35:52.439088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.439117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.439372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.439408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.439526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.439556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.439771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.439801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.440007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.440037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 
00:27:56.896 [2024-07-26 11:35:52.440250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.440280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.440517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.440547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.440684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.440715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.440908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.440937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.441127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.441157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 
00:27:56.896 [2024-07-26 11:35:52.441345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.441375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.441565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.441594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.441792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.441824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-07-26 11:35:52.442115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-07-26 11:35:52.442144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.442337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.442367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.442480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.442510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.442698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.442729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.442903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.442933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.443125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.443155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.443343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.443373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.443581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.443610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.443803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.443834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.444048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.444078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.444272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.444302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.444541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.444570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.444760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.444790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.445066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.445096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.445360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.445390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.445588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.445618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.445755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.445786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.445969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.445998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.446240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.446270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.446461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.446491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.446732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.446764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.446885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.446914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.447104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.447133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.447326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.447356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.447570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.447600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.447792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.447823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.448001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.448030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.448143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.448173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.448363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.448399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.448511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.448541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.448804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.448834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.449017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.449047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.449240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.449269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.449453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.449483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.449659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.449690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.449878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.449908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.450148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.450177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-07-26 11:35:52.450442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-07-26 11:35:52.450472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-07-26 11:35:52.450667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.450698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.450804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.450833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.451075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.451105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.451302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.451332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 
00:27:56.898 [2024-07-26 11:35:52.451536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.451566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.451776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.451806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.451997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.452027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.452272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.452301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.452490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.452520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 
00:27:56.898 [2024-07-26 11:35:52.452717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.452747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.452923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.452953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.453127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.453157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.453373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.453402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.453584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.453614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 
00:27:56.898 [2024-07-26 11:35:52.453813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.453843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.453973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.454002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.454218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.454248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.454367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.454397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.454647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.454677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 
00:27:56.898 [2024-07-26 11:35:52.454865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.454895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.455163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.455193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.455331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.455361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.455640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.455671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.455848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.455877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 
00:27:56.898 [2024-07-26 11:35:52.456141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.456171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.456377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.456406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.456682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.456712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.456847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.456876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 00:27:56.898 [2024-07-26 11:35:52.457061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.898 [2024-07-26 11:35:52.457091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:56.898 qpair failed and we were unable to recover it. 
00:27:56.899 [2024-07-26 11:35:52.463355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.899 [2024-07-26 11:35:52.463424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:56.899 qpair failed and we were unable to recover it.
00:27:56.901 [2024-07-26 11:35:52.482080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.901 [2024-07-26 11:35:52.482109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.901 qpair failed and we were unable to recover it. 00:27:56.901 [2024-07-26 11:35:52.482348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.901 [2024-07-26 11:35:52.482377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.901 qpair failed and we were unable to recover it. 00:27:56.901 [2024-07-26 11:35:52.482591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.901 [2024-07-26 11:35:52.482620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.901 qpair failed and we were unable to recover it. 00:27:56.901 [2024-07-26 11:35:52.482822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.901 [2024-07-26 11:35:52.482852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.901 qpair failed and we were unable to recover it. 00:27:56.901 [2024-07-26 11:35:52.483029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.901 [2024-07-26 11:35:52.483059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.901 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.483241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.483270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.483415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.483445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.483618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.483655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.483764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.483794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.484059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.484088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.484340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.484369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.484604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.484643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.484837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.484868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.485131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.485160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.485288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.485317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.485523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.485553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.485665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.485696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.485816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.485846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.486063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.486092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.486279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.486309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.486546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.486576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.486793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.486824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.487089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.487119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.487309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.487338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.487538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.487568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.487763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.487794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.487908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.487938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.488208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.488238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.488506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.488535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.488721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.488752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.488941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.488971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.489089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.489118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.489284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.489314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.489500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.489530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.489711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.489741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.489860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.489889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.490057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.490087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.490217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.490246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.490420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.490456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.490609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.490659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 
00:27:56.902 [2024-07-26 11:35:52.490928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.490958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.491196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.902 [2024-07-26 11:35:52.491226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.902 qpair failed and we were unable to recover it. 00:27:56.902 [2024-07-26 11:35:52.491415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.491444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.491575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.491604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.491734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.491764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 
00:27:56.903 [2024-07-26 11:35:52.491887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.491916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.492158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.492188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.492381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.492411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.492529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.492559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.492692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.492723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 
00:27:56.903 [2024-07-26 11:35:52.492828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.492857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.493033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.493063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.493214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.493244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.493363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.493392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.493516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.493546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 
00:27:56.903 [2024-07-26 11:35:52.493663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.493693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.493874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.493904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.494036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.494066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.494334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.494377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.494650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.494698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 
00:27:56.903 [2024-07-26 11:35:52.495005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.495037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.495172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.495203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.495379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.495410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.495613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.495653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.495839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.495869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 
00:27:56.903 [2024-07-26 11:35:52.496121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.496151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.496369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.496400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.496674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.496723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.496940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.496976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.497227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.497257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 
00:27:56.903 [2024-07-26 11:35:52.497471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.497501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.497746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.497779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.497960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.497991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.498190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.498221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 00:27:56.903 [2024-07-26 11:35:52.498510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.903 [2024-07-26 11:35:52.498554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:56.903 qpair failed and we were unable to recover it. 
00:27:56.903 [2024-07-26 11:35:52.498854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.903 [2024-07-26 11:35:52.498893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:56.903 qpair failed and we were unable to recover it.
00:27:57.184 [2024-07-26 11:35:52.525569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.525599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.525797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.525828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.526003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.526032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.526296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.526325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.526518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.526548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 
00:27:57.184 [2024-07-26 11:35:52.526836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.526867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.527042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.527072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.527264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.527294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.527477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.527512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.527656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.527688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 
00:27:57.184 [2024-07-26 11:35:52.527949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.527979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.528108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.528138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.528382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.528412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.528612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.528649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.528851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.528881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 
00:27:57.184 [2024-07-26 11:35:52.529122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.529153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.529336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.529366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.529572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.529602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.529793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.529823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.529958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.529987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 
00:27:57.184 [2024-07-26 11:35:52.530128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.530158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.530340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.530370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.530501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.530532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.184 [2024-07-26 11:35:52.530666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.184 [2024-07-26 11:35:52.530697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.184 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.530962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.530992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 
00:27:57.185 [2024-07-26 11:35:52.531237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.531267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.531452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.531482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.531675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.531705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.531846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.531875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.532065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.532095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 
00:27:57.185 [2024-07-26 11:35:52.532270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.532300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.532571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.532601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.532732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.532763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.533010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.533040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.533277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.533307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 
00:27:57.185 [2024-07-26 11:35:52.533430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.533460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.533677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.533708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.533917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.533947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.534189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.534219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.534509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.534539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 
00:27:57.185 [2024-07-26 11:35:52.534747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.534778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.534910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.534940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.535159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.535189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.535320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.535350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.535620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.535665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 
00:27:57.185 [2024-07-26 11:35:52.535909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.535939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.536184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.536214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.536398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.536428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.536618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.536661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.536858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.536888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 
00:27:57.185 [2024-07-26 11:35:52.537091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.537121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.537385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.537415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.537609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.537648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.537780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.537810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 00:27:57.185 [2024-07-26 11:35:52.538047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.185 [2024-07-26 11:35:52.538076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.185 qpair failed and we were unable to recover it. 
00:27:57.186 [2024-07-26 11:35:52.538211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.538241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.538508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.538539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.538727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.538759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.538948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.538978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.539087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.539116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 
00:27:57.186 [2024-07-26 11:35:52.539294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.539324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.539498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.539528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.539671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.539702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.539834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.539863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.540152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.540182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 
00:27:57.186 [2024-07-26 11:35:52.540369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.540398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.540655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.540686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.540812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.540842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.541087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.541117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.541362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.541392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 
00:27:57.186 [2024-07-26 11:35:52.541604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.541642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.541853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.541883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.542097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.542126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.542300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.542329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 00:27:57.186 [2024-07-26 11:35:52.542470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.186 [2024-07-26 11:35:52.542500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.186 qpair failed and we were unable to recover it. 
00:27:57.186 [2024-07-26 11:35:52.542749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.186 [2024-07-26 11:35:52.542781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.186 qpair failed and we were unable to recover it.
00:27:57.189 [2024-07-26 11:35:52.568589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.568619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.568876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.568906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.569125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.569154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.569273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.569302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.569480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.569510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 
00:27:57.189 [2024-07-26 11:35:52.569699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.569732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.569843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.569874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.569982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.570012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.570198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.570228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.570402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.570432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 
00:27:57.189 [2024-07-26 11:35:52.570613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.570653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.570840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.570869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.571118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.189 [2024-07-26 11:35:52.571148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.189 qpair failed and we were unable to recover it. 00:27:57.189 [2024-07-26 11:35:52.571278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.571307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.571413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.571444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 
00:27:57.190 [2024-07-26 11:35:52.571645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.571677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.571898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.571927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.572051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.572081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.572316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.572352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.572540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.572570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 
00:27:57.190 [2024-07-26 11:35:52.572865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.572896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.573165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.573195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.573441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.573471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.573691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.573722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.573983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.574012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 
00:27:57.190 [2024-07-26 11:35:52.574277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.574308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.574503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.574533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.574732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.574763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.574886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.574916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.575042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.575072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 
00:27:57.190 [2024-07-26 11:35:52.575341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.575371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.575557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.575586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.575791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.575822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.576015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.576044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.576284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.576314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 
00:27:57.190 [2024-07-26 11:35:52.576506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.576679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.576709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.576884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.576916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.577185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.577214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.577405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.577436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 
00:27:57.190 [2024-07-26 11:35:52.577701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.577732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.577919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.577949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.578193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.578223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.578462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.578492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 00:27:57.190 [2024-07-26 11:35:52.578752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.190 [2024-07-26 11:35:52.578782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.190 qpair failed and we were unable to recover it. 
00:27:57.191 [2024-07-26 11:35:52.578993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.579023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.579290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.579319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.579609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.579645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.579850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.579880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.580147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.580177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 
00:27:57.191 [2024-07-26 11:35:52.580369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.580399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.580661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.580692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.580999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.581029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.581291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.581320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.581509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.581539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 
00:27:57.191 [2024-07-26 11:35:52.581742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.581773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.582014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.582044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.582287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.582316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.582574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.582608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.582889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.582920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 
00:27:57.191 [2024-07-26 11:35:52.583202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.583232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.583483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.583512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.583727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.583758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.583998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.584027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.584212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.584241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 
00:27:57.191 [2024-07-26 11:35:52.584482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.584512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.584698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.584729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.584849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.584879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.585102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.585131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.585398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.585429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 
00:27:57.191 [2024-07-26 11:35:52.585719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.585750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.586043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.586073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.586282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.586312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.586579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.586609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 00:27:57.191 [2024-07-26 11:35:52.586818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.586849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 
00:27:57.191 [2024-07-26 11:35:52.586974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.191 [2024-07-26 11:35:52.587003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.191 qpair failed and we were unable to recover it. 
00:27:57.191-00:27:57.194 [2024-07-26 11:35:52.587208 through 11:35:52.616294] (the three messages above repeat verbatim for every retry in this interval, differing only in timestamps: each connect() to addr=10.0.0.2, port=4420 returns errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f41f0000b90, and the qpair fails without recovering)
00:27:57.194 [2024-07-26 11:35:52.616482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.194 [2024-07-26 11:35:52.616512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.194 qpair failed and we were unable to recover it. 00:27:57.194 [2024-07-26 11:35:52.616686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.194 [2024-07-26 11:35:52.616717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.194 qpair failed and we were unable to recover it. 00:27:57.194 [2024-07-26 11:35:52.616951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.194 [2024-07-26 11:35:52.616982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.194 qpair failed and we were unable to recover it. 00:27:57.194 [2024-07-26 11:35:52.617246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.194 [2024-07-26 11:35:52.617276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.194 qpair failed and we were unable to recover it. 00:27:57.194 [2024-07-26 11:35:52.617538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.194 [2024-07-26 11:35:52.617568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.194 qpair failed and we were unable to recover it. 
00:27:57.194 [2024-07-26 11:35:52.617821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.617852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.618031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.618061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.618327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.618356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.618542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.618572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.618794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.618825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 
00:27:57.195 [2024-07-26 11:35:52.619095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.619125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.619418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.619448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.619719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.619750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.619963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.619994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.620260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.620290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 
00:27:57.195 [2024-07-26 11:35:52.620483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.620519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.620776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.620807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.620914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.620944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.621131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.621162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.621375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.621404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 
00:27:57.195 [2024-07-26 11:35:52.621613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.621653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.621844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.621874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.622133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.622163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.622404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.622433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.622622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.622660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 
00:27:57.195 [2024-07-26 11:35:52.622849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.622879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.623147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.623176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.623465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.623495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.623776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.623807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.624085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.624115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 
00:27:57.195 [2024-07-26 11:35:52.624404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.624435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.624714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.624745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.625025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.625055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.625309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.625339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.625598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.625635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 
00:27:57.195 [2024-07-26 11:35:52.625814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-07-26 11:35:52.625844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.195 qpair failed and we were unable to recover it. 00:27:57.195 [2024-07-26 11:35:52.626114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.626144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.626385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.626414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.626684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.626715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.626924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.626954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 
00:27:57.196 [2024-07-26 11:35:52.627201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.627231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.627502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.627532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.627738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.627770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.627950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.627980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.628254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.628284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 
00:27:57.196 [2024-07-26 11:35:52.628474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.628504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.628771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.628801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.629090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.629120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.629373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.629403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.629619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.629658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 
00:27:57.196 [2024-07-26 11:35:52.629870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.629901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.630146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.630176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.630352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.630382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.630571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.630602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.630793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.630823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 
00:27:57.196 [2024-07-26 11:35:52.631017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.631052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.631207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.631238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.631460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.631490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.631780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.631811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.632006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.632036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 
00:27:57.196 [2024-07-26 11:35:52.632325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.632356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.632634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.632665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.632852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.632882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.633085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.633115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.633386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.633416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 
00:27:57.196 [2024-07-26 11:35:52.633711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.633742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.633987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.634017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.634213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.634244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.634445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.634475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.634670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.634701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 
00:27:57.196 [2024-07-26 11:35:52.634903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.634933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.635045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.635075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.635337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.635368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.196 qpair failed and we were unable to recover it. 00:27:57.196 [2024-07-26 11:35:52.635655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-07-26 11:35:52.635687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.197 qpair failed and we were unable to recover it. 00:27:57.197 [2024-07-26 11:35:52.635815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.197 [2024-07-26 11:35:52.635845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.197 qpair failed and we were unable to recover it. 
00:27:57.197 [2024-07-26 11:35:52.636035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.197 [2024-07-26 11:35:52.636065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.197 qpair failed and we were unable to recover it. 00:27:57.197 [2024-07-26 11:35:52.636312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.197 [2024-07-26 11:35:52.636342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.197 qpair failed and we were unable to recover it. 00:27:57.197 [2024-07-26 11:35:52.636522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.197 [2024-07-26 11:35:52.636552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.197 qpair failed and we were unable to recover it. 00:27:57.197 [2024-07-26 11:35:52.636823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.197 [2024-07-26 11:35:52.636853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.197 qpair failed and we were unable to recover it. 00:27:57.197 [2024-07-26 11:35:52.637143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.197 [2024-07-26 11:35:52.637173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.197 qpair failed and we were unable to recover it. 
00:27:57.200 [2024-07-26 11:35:52.666866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.666898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.667185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.667216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.667438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.667468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.667746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.667778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.668066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.668096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 
00:27:57.200 [2024-07-26 11:35:52.668274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.668304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.668482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.668512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.668758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.668790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.669011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.669041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.669287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.669317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 
00:27:57.200 [2024-07-26 11:35:52.669540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.669570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.669853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.669884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.670147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.670178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.670357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.670387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.670684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.670715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 
00:27:57.200 [2024-07-26 11:35:52.670927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.670958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.671098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.671128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.671402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.671432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.671611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.671651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.671923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.671954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 
00:27:57.200 [2024-07-26 11:35:52.672243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.672274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.672556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.672587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.672866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.672898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.673179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.673208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.673498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.673528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 
00:27:57.200 [2024-07-26 11:35:52.673810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.673848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.674099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.674149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.200 qpair failed and we were unable to recover it. 00:27:57.200 [2024-07-26 11:35:52.674342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.200 [2024-07-26 11:35:52.674372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.674578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.674608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.674885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.674916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.675201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.675232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.675483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.675514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.675717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.675748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.675864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.675895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.676077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.676108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.676285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.676316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.676568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.676601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.676807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.676838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.677110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.677140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.677347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.677378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.677650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.677681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.677936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.677966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.678285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.678315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.678445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.678475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.678674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.678706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.678894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.678925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.679198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.679228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.679424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.679454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.679728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.679759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.679952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.679982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.680176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.680207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.680412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.680442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.680642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.680673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.680893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.680924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.681225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.681255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.681529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.681560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.681762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.681794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.682050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.682080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.682224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.682253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.682504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.682534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.682730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.682761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.682964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.682995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.683272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.683302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.683484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.683515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 00:27:57.201 [2024-07-26 11:35:52.683740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.201 [2024-07-26 11:35:52.683789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.201 qpair failed and we were unable to recover it. 
00:27:57.201 [2024-07-26 11:35:52.684092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.684128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.684419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.684449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.684650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.684682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.684931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.684961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.685246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.685276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 
00:27:57.202 [2024-07-26 11:35:52.685557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.685588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.685822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.685853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.686049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.686079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.686224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.686255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.686506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.686536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 
00:27:57.202 [2024-07-26 11:35:52.686813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.686845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.687143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.687173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.687352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.687382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.687583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.687614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 00:27:57.202 [2024-07-26 11:35:52.687894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.202 [2024-07-26 11:35:52.687925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.202 qpair failed and we were unable to recover it. 
00:27:57.205 [2024-07-26 11:35:52.718779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.718810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.719067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.719097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.719385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.719415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.719704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.719736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.719945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.719975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 
00:27:57.205 [2024-07-26 11:35:52.720279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.720310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.720584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.720615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.720744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.720775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.721050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.721080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.721383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.721415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 
00:27:57.205 [2024-07-26 11:35:52.721691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.721722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.721932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.721962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.722197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.722228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.722434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.722464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.722743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.722775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 
00:27:57.205 [2024-07-26 11:35:52.723014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.723045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.723330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.723360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.723648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.723680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.723945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.205 [2024-07-26 11:35:52.723976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.205 qpair failed and we were unable to recover it. 00:27:57.205 [2024-07-26 11:35:52.724193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.724224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.724360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.724391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.724650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.724681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.724981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.725058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.725294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.725330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.725614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.725658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.725868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.725899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.726177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.726207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.726512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.726543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.726814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.726845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.727084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.727115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.727313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.727343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.727528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.727558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.727780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.727812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.728096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.728126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.728379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.728409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.728673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.728705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.728941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.728972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.729224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.729254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.729507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.729537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.729759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.729790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.730013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.730043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.730322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.730352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.730619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.730658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.730880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.730911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.731118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.731148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.731402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.731432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.731643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.731675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.731953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.731984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.732112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.732142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.732348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.732384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.732516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.732546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.732832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.732863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.733118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.733148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.733402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.733432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.733657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.733689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 
00:27:57.206 [2024-07-26 11:35:52.733908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.206 [2024-07-26 11:35:52.733938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.206 qpair failed and we were unable to recover it. 00:27:57.206 [2024-07-26 11:35:52.734239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.734269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.734459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.734490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.734744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.734775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.735050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.735080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 
00:27:57.207 [2024-07-26 11:35:52.735292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.735323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.735507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.735538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.735730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.735761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.736027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.736058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.736316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.736347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 
00:27:57.207 [2024-07-26 11:35:52.736650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.736681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.736955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.736986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.737239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.737269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.737549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.737580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.737874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.737905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 
00:27:57.207 [2024-07-26 11:35:52.738095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.738125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.738392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.738423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.738702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.738733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.738881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.738912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 00:27:57.207 [2024-07-26 11:35:52.739186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.207 [2024-07-26 11:35:52.739217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.207 qpair failed and we were unable to recover it. 
00:27:57.207 [2024-07-26 11:35:52.739417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.739448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.739664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.739702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.739941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.739972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.740252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.740282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.740536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.740566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.740826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.740858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.741047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.741077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.741285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.741315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.741591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.741622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.741897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.741928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.742110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.742140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.742345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.742375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.742643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.742675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.742945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.742976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.743177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.743208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.743474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.743506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.743803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.743835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.744068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.744098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.744295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.207 [2024-07-26 11:35:52.744326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.207 qpair failed and we were unable to recover it.
00:27:57.207 [2024-07-26 11:35:52.744601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.744639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.744934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.744964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.745168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.745199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.745464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.745494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.745609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.745650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.745843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.745874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.746128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.746159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.746463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.746495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.746722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.746754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.746961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.746991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.747231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.747261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.747547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.747578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.747873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.747905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.748187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.748218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.748426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.748457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.748759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.748791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.749060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.749091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.749277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.749307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.749594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.749634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.749914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.749945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.750238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.750269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.750554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.750586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.750875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.750906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.751193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.751225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.751507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.751539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.751827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.751858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.752145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.752176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.752319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.752351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.752665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.752698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.752934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.752966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.753233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.753264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.753487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.753519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.753795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.753830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.754047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.754077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.754383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.754413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.754691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.754723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.754925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.754955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.755205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.208 [2024-07-26 11:35:52.755235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.208 qpair failed and we were unable to recover it.
00:27:57.208 [2024-07-26 11:35:52.755419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.755450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.755670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.755701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.755972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.756002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.756219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.756250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.756455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.756486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.756773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.756805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.756990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.757020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.757287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.757317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.757503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.757533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.757737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.757769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.757971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.758001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.758258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.758288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.758592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.758635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.758936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.758967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.759215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.759246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.759521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.759551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.759803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.759835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.760038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.760069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.760358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.760388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.760671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.760703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.761000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.761031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.761306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.761336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.761650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.761682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.761875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.761905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.762149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.762179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.762407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.762438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.762648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.762680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.762963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.209 [2024-07-26 11:35:52.762994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.209 qpair failed and we were unable to recover it.
00:27:57.209 [2024-07-26 11:35:52.763288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.763319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.763572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.763602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.763919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.763950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.764153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.764184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.764465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.764495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.764775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.764808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.765014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.765045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.765249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.765279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.765506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.765536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.765841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.765873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.766082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.766112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.766391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.766433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.766691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.766723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.766848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.766879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.767129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.767159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.767442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.767472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.767741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.767773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.767998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.768029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.768305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.768336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.768594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.768624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.768848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.768878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.769155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.769186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.769475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.769505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.769760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.769792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.770051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.770081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.770392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.770422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.770621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.770664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.770862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.770893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.771148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.771178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.771483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.210 [2024-07-26 11:35:52.771513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.210 qpair failed and we were unable to recover it.
00:27:57.210 [2024-07-26 11:35:52.771710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.771742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 00:27:57.210 [2024-07-26 11:35:52.772021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.772051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 00:27:57.210 [2024-07-26 11:35:52.772278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.772309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 00:27:57.210 [2024-07-26 11:35:52.772579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.772610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 00:27:57.210 [2024-07-26 11:35:52.772911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.772942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 
00:27:57.210 [2024-07-26 11:35:52.773142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.773172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 00:27:57.210 [2024-07-26 11:35:52.773446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.773476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 00:27:57.210 [2024-07-26 11:35:52.773746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.210 [2024-07-26 11:35:52.773778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.210 qpair failed and we were unable to recover it. 00:27:57.210 [2024-07-26 11:35:52.773984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.774019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.774220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.774250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.774475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.774506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.774809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.774840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.775111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.775142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.775444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.775474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.775755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.775787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.776043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.776073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.776330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.776361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.776621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.776675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.776896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.776926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.777112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.777143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.777417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.777448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.777662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.777694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.777908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.777939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.778218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.778249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.778402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.778432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.778710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.778742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.778996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.779026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.779303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.779333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.779620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.779661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.779915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.779945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.780151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.780181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.780400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.780430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.780685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.780717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.780977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.781008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.781202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.781232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.781429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.781460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.781650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.781682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.781946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.781976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.782190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.782220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.782440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.782470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.782667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.782699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.782975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.783005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.783292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.783322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.783512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.783543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.211 [2024-07-26 11:35:52.783751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.783783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 
00:27:57.211 [2024-07-26 11:35:52.784010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.211 [2024-07-26 11:35:52.784040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.211 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.784301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.784331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.784590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.784621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.784940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.784971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.785185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.785216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 
00:27:57.212 [2024-07-26 11:35:52.785431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.785461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.785600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.785642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.785831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.785863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.786048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.786078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.786362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.786393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 
00:27:57.212 [2024-07-26 11:35:52.786666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.786699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.786987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.787019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.787161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.787192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.787445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.787474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.787760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.787792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 
00:27:57.212 [2024-07-26 11:35:52.788091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.788122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.788395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.788426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.788728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.788759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.788993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.789024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.789284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.789315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 
00:27:57.212 [2024-07-26 11:35:52.789617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.789657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.789802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.789834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.790017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.790047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.790327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.790357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.790557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.790587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 
00:27:57.212 [2024-07-26 11:35:52.790807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.790838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.791149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.791180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.791373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.791403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.791640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.791672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 00:27:57.212 [2024-07-26 11:35:52.791896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.791926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 
00:27:57.212 [2024-07-26 11:35:52.792109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.212 [2024-07-26 11:35:52.792139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.212 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" entries for tqpair=0x18b8f30, addr=10.0.0.2, port=4420 repeat through 11:35:52.824023 — repeats omitted]
00:27:57.490 [2024-07-26 11:35:52.824239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.824273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.824574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.824615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.824914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.824949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.825155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.825190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.825333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.825366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 
00:27:57.490 [2024-07-26 11:35:52.825657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.825694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.825894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.825943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.826206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.826239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.826501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.826542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.826803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.826837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 
00:27:57.490 [2024-07-26 11:35:52.826981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.827012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.827294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.827325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.827607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.827648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.827928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.827959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.828213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.828243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 
00:27:57.490 [2024-07-26 11:35:52.828504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.828534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.828735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.828767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.829040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.829070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.490 qpair failed and we were unable to recover it. 00:27:57.490 [2024-07-26 11:35:52.829269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.490 [2024-07-26 11:35:52.829299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.829550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.829581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.829818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.829850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.830108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.830139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.830391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.830423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.830700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.830732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.830930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.830961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.831225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.831255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.831463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.831493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.831761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.831793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.832075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.832104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.832388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.832419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.832610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.832651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.832858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.832888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.833091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.833121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.833325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.833355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.833647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.833678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.833933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.833969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.834224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.834255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.834475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.834505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.834712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.834744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.834876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.834907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.835159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.835189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.835384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.835415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.835618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.835659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.835925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.835956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.836229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.836261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.836555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.836585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.836818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.836849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.837149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.837180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.837457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.837487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.837697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.837729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.837932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.837963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.838167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.838197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.838406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.838437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.838649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.838682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.838829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.838860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.839005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.839036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.839217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.839248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.839441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.839471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.839667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.839699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.839957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.839987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.840243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.840273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.840533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.840564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.840769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.840801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.841080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.841111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.841317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.841347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.841495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.841526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.841745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.841777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.842072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.842102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.842382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.842413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.842605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.842662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.842901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.842932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.843088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.843118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.843396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.843426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.843639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.843672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.843879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.843910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.491 [2024-07-26 11:35:52.844118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.844149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.844407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.844438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.844703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.844735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.844992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.845022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 00:27:57.491 [2024-07-26 11:35:52.845214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.491 [2024-07-26 11:35:52.845244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.491 qpair failed and we were unable to recover it. 
00:27:57.493 [2024-07-26 11:35:52.876250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.876280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.876396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.876426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.876559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.876590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.876886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.876918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.877150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.877202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 
00:27:57.493 [2024-07-26 11:35:52.877428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.877467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.877732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.877764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.878042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.878076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.878293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.878337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.878559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.878590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 
00:27:57.493 [2024-07-26 11:35:52.878861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.878893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.879092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.879121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.879404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.879435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.879703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.879735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 00:27:57.493 [2024-07-26 11:35:52.880036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.880067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.493 qpair failed and we were unable to recover it. 
00:27:57.493 [2024-07-26 11:35:52.880286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.493 [2024-07-26 11:35:52.880317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.880599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.880637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.880919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.880950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.881234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.881264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.881558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.881589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.881797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.881829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.882053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.882083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.882288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.882320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.882621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.882680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.882883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.882914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.883126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.883157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.883413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.883444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.883641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.883673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.883960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.883991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.884190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.884221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.884427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.884458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.884730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.884763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.885049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.885080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.885314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.885345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.885620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.885660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.885897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.885933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.886140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.886171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.886463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.886493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.886776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.886808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.887080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.887111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.887404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.887435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.887658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.887690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.887899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.887929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.888208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.888239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.888443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.888474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.888729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.888761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.888968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.888999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.889272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.889302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.889486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.889516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.889803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.889836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.890091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.890121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.890399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.890429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.890613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.890667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.890927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.890957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.891223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.891254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.891554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.891585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.891855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.891888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.892193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.892224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.892492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.892523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.892746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.892778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.893024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.893055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.893323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.893353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.893653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.893690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.893874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.893905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.894163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.894193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.894388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.894419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.894701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.894734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.895040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.895071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.895277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.895308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.895532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.895563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.895824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.895856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.896161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.896192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.896413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.896443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.896720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.896753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.897018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.897048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.897302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.897333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.897571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.897602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.897925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.897957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.898214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.898245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.898554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.898584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 
00:27:57.494 [2024-07-26 11:35:52.898872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.494 [2024-07-26 11:35:52.898904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.494 qpair failed and we were unable to recover it. 00:27:57.494 [2024-07-26 11:35:52.899131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.495 [2024-07-26 11:35:52.899161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.495 qpair failed and we were unable to recover it. 00:27:57.495 [2024-07-26 11:35:52.899364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.495 [2024-07-26 11:35:52.899395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.495 qpair failed and we were unable to recover it. 00:27:57.495 [2024-07-26 11:35:52.899624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.495 [2024-07-26 11:35:52.899664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.495 qpair failed and we were unable to recover it. 00:27:57.495 [2024-07-26 11:35:52.899870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.495 [2024-07-26 11:35:52.899901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.495 qpair failed and we were unable to recover it. 
00:27:57.495 [2024-07-26 11:35:52.900084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.900116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.900341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.900371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.900620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.900663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.900873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.900903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.901087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.901117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.901248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.901280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.901558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.901589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.901862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.901893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.902190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.902220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.902436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.902467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.902666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.902699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.902979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.903011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.903241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.903273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.903406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.903436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.903704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.903736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.903979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.904010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.904136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.904166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.904364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.904395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.904732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.904807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.905102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.905136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.905395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.905426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.905727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.905760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.905975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.906011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.906223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.906254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.906390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.906420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.906701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.906732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.906927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.906958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.907156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.907186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.907461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.907491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.907702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.907734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.907968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.907997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.908281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.908321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.908602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.908642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.908861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.908891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.909168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.909198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.909421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.909451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.909708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.909739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.909953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.909983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.910251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.910281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.910583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.910613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.910829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.910860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.911042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.911072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.911354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.911384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.911660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.911691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.911957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.911987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.912215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.912246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.912518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.912548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.912737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.912768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.913048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.913078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.913349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.913379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.913661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.913693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.913979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.914009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.914204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.914235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.914447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.914477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.914662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.914693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.914946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.914976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.915272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.915302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.915512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.915543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.915669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.915706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.915916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.915946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.495 qpair failed and we were unable to recover it.
00:27:57.495 [2024-07-26 11:35:52.916155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.495 [2024-07-26 11:35:52.916185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.916372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.916402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.916681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.916712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.916930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.916960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.917212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.917243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.917446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.917475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.917755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.917786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.917987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.918018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.918294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.918324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.918636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.918667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.918940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.918971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.919169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.919198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.919425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.919456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.919762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.919792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.920069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.920100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.920357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.920388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.920590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.920621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.920836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.920866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.921068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.921099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.921315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.921345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.921637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.921669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.921952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.921983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.922185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.922215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.922407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.922437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.922690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.922722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.923024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.923054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.923326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.923356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.923496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.923526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.923803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.923834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.924098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.924129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.924325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.924355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.924539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.924570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.924762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.924794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.925049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.925079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.925384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.496 [2024-07-26 11:35:52.925414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.496 qpair failed and we were unable to recover it.
00:27:57.496 [2024-07-26 11:35:52.925693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.925724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.925979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.926010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.926294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.926325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.926610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.926666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.926935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.926966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 
00:27:57.496 [2024-07-26 11:35:52.927254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.927285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.927574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.927604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.927890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.927922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.928216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.928246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.928529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.928559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 
00:27:57.496 [2024-07-26 11:35:52.928796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.928827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.929033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.929064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.929297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.929327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.929583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.929614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.929926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.929957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 
00:27:57.496 [2024-07-26 11:35:52.930255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.930285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.930499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.930530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.930733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.930764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.931074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.931105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.931377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.931407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 
00:27:57.496 [2024-07-26 11:35:52.931637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.931668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.931877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.931908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.932189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.932219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.932425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.932456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.932735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.932766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 
00:27:57.496 [2024-07-26 11:35:52.933021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.933052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.933179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.933209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.933466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.933496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.933707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.933739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 00:27:57.496 [2024-07-26 11:35:52.934029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.496 [2024-07-26 11:35:52.934059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.496 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.934327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.934358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.934622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.934679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.934972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.935003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.935204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.935235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.935514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.935544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.935828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.935859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.936121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.936151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.936343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.936374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.936486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.936516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.936801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.936832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.937116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.937147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.937356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.937386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.937604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.937643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.937847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.937884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.938171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.938201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.938502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.938533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.938829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.938860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.939068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.939098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.939377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.939407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.939710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.939741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.939924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.939954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.940224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.940255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.940558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.940588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.940864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.940896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.941178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.941209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.941429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.941460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.941650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.941682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.941887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.941927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.942216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.942246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.942501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.942532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.942809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.942840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.943095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.943125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.943409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.943439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.943722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.943754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.943958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.943988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.944175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.944205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.944482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.944512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.944647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.944679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.944886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.944917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.945170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.945200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.945469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.945500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.945702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.945734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.946002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.946032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.946264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.946295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.946569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.946599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.946895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.946926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.947209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.947239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.947533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.947563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.947785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.947816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.948099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.948130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.948390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.948420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.948728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.948760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.949069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.949100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.949369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.949404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.949608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.949647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.497 [2024-07-26 11:35:52.949915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.949945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.950255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.950286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.950550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.950580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.950854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.950886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 00:27:57.497 [2024-07-26 11:35:52.951184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.497 [2024-07-26 11:35:52.951214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.497 qpair failed and we were unable to recover it. 
00:27:57.499 [2024-07-26 11:35:52.981879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.981910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.982168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.982199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.982395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.982425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.982702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.982739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.983024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.983055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 
00:27:57.499 [2024-07-26 11:35:52.983283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.983314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.983591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.983622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.983929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.983960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.984195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.984225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.984526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.984556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 
00:27:57.499 [2024-07-26 11:35:52.984838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.984869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.985157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.985187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.985465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.985496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.985784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.985815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.986004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.986034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 
00:27:57.499 [2024-07-26 11:35:52.986235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.986266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.986542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.986573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.986724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.986756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.987038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.987068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.987338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.987369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 
00:27:57.499 [2024-07-26 11:35:52.987647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.987678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.987981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.988011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.988284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.988314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.988586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.988617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.988922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.988953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 
00:27:57.499 [2024-07-26 11:35:52.989227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.989258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.989475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.989505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.989712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.989743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.989973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.990003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.990281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.990312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 
00:27:57.499 [2024-07-26 11:35:52.990604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.499 [2024-07-26 11:35:52.990651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.499 qpair failed and we were unable to recover it. 00:27:57.499 [2024-07-26 11:35:52.990924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.990954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.991141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.991172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.991456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.991486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.991684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.991715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:52.991842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.991872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.992152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.992183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.992380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.992410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.992690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.992721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.992977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.993008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:52.993240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.993270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.993523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.993554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.993775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.993807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.994011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.994047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.994334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.994364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:52.994617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.994665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.994869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.994899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.995162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.995192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.995495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.995525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.995740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.995770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:52.996047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.996077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.996305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.996336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.996521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.996554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.996802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.996835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.997046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.997079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:52.997276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.997308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.997531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.997563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.997850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.997884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.998145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.998177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.998333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.998365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:52.998561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.998593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.998809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.998843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.999057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.999089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.999395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.999427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:52.999585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.999618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:52.999915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:52.999949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.000241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.000274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.000554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.000587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.000810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.000845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.001156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.001189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:53.001423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.001456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.001783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.001817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.002041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.002074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.002210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.002243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.002523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.002555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:53.002775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.002808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.003014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.003047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.003247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.003280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.003500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.003532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.003795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.003829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:53.004080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.004113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.004362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.004395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.004621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.004661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.004876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.004914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 00:27:57.500 [2024-07-26 11:35:53.005201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.500 [2024-07-26 11:35:53.005233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.500 qpair failed and we were unable to recover it. 
00:27:57.500 [2024-07-26 11:35:53.005515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.005548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.005795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.005829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.006057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.006092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.006348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.006382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.006587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.006619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.006902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.006935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.007206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.007239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.007398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.007431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.007737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.007770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.008029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.008062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.008373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.008406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.008612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.008653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.008941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.008974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.009261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.009294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.009439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.009472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.009745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.009779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.009967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.010000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.500 [2024-07-26 11:35:53.010206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.500 [2024-07-26 11:35:53.010239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.500 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.010461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.010493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.010733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.010765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.010950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.010982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.011172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.011204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.011424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.011456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.011605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.011646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.011781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.011813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.012134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.012214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.012426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.012462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.012766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.012804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.013091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.013125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.013332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.013365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.013624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.013668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.013876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.013908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.014165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.014197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.014453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.014485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.014791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.014825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.015096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.015129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.015324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.015357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.015515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.015548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.015821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.015856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.016100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.016134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.016442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.016474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.016778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.016812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.017105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.017138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.017437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.017470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.017745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.017779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.018065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.018098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.018387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.018420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.018613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.018654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.018960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.018992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.019196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.019230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.019413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.019446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.019723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.019758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.019960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.019999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.020186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.020219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.020419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.020452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.020647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.020681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.020878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.020911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.021188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.021222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.021477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.021510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.021787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.021821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.022026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.022059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.022267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.022300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.022414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.022447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.022727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.022762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.022968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.023002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.023234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.023267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.023554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.023588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.023806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.023842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.024101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.024133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.024411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.024444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.024646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.024679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.024970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.025002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.025295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.025328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.025612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.025656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.025886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.025919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.026186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.026219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.026489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.026522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.026713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.026748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.026894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.026927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.027129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.027168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.027393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.027427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.027684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.027718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.028022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.028055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.501 qpair failed and we were unable to recover it.
00:27:57.501 [2024-07-26 11:35:53.028309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.501 [2024-07-26 11:35:53.028342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.028571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.028604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.028822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.028856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.029037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.029069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.029284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.029317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.029599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.029641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.029919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.029953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.030235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.030267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.030555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.030587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.030892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.030927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.031131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.031164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.031446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.502 [2024-07-26 11:35:53.031479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.502 qpair failed and we were unable to recover it.
00:27:57.502 [2024-07-26 11:35:53.031737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.031772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.032069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.032102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.032378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.032411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.032692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.032726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.032876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.032909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.033189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.033222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.033501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.033535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.033766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.033800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.033991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.034023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.034285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.034317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.034509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.034542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.034841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.034880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.035167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.035200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.035485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.035519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.035727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.035761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.036043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.036076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.036227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.036260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.036517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.036551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.036821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.036853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.037062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.037094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.037378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.037411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.037665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.037699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.037981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.038015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.038221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.038254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.038535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.038568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.038804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.038838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.039034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.039067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.039271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.039303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.039522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.039555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.039762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.039796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.040055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.040088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.040361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.040394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.040681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.040715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.040872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.040905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.041128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.041161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.041465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.041498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.041770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.041804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.042062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.042095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.042376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.042409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.042700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.042735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.042852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.042885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.043086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.043119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.043393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.043426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.043612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.043662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.043864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.043897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.044161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.044194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.044490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.044522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.044733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.044767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.044980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.045013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.045212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.045245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.045525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.045557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.045883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.045917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.046188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.046221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.046527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.046560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.046863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.046897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.047090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.047123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.047387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.047420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.047605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.047647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.047925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.047959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.048219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.048252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 
00:27:57.502 [2024-07-26 11:35:53.048558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.048591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.502 qpair failed and we were unable to recover it. 00:27:57.502 [2024-07-26 11:35:53.048827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.502 [2024-07-26 11:35:53.048861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.049048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.049081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.049346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.049380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.049662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.049696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 
00:27:57.503 [2024-07-26 11:35:53.049881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.049914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.050175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.050208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.050409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.050442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.050649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.050683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.050886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.050918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 
00:27:57.503 [2024-07-26 11:35:53.051118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.051152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.051432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.051466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.051617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.051661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.051795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.051829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.052101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.052134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 
00:27:57.503 [2024-07-26 11:35:53.052279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.052312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.052495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.052529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.052785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.052819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.053037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.053071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.053299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.053337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 
00:27:57.503 [2024-07-26 11:35:53.053520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.053553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.053822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.053857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.054139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.054172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.054436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.054469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 00:27:57.503 [2024-07-26 11:35:53.054748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.503 [2024-07-26 11:35:53.054781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.503 qpair failed and we were unable to recover it. 
00:27:57.503 [2024-07-26 11:35:53.055060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.055092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.055371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.055404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.055599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.055643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.055948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.055981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.056193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.056225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.056373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.056407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.056670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.056703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.056894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.056928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.057074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.057107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.057334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.057367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.057570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.057603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.057894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.057928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.058199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.058231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.058466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.058500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.058687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.058721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.058946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.058979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.059236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.059270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.059463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.059496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.059712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.059746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.060028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.060061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.060318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.060351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.060655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.060695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.060921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.060954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.061236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.061268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.061461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.061495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.061699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.061732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.061956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.061988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.062246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.062280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.062480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.062513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.062657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.062692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.062986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.063019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.063169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.063203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.063414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.063447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.063755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.063788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.064047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.064080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.064273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.064307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.064514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.064547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.064823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.064859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.065059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.065092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.065347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.065380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.503 [2024-07-26 11:35:53.065580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.503 [2024-07-26 11:35:53.065614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.503 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.065835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.065869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.066170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.066204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.066395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.066428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.066624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.066669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.066868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.066900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.067168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.067200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.067471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.067503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.067805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.067839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.068126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.068160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.068415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.068448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.068601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.068646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.068954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.068987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.069194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.069227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.069420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.069453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.069730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.069764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.070054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.070087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.070354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.070387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.070646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.070680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.070982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.071015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.071269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.071302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.071566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.071600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.071905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.071939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.072139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.072172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.072397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.072431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.072677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.072711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.072901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.072934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.073189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.073221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.073477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.073510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.073705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.073740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.073935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.073967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.074245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.074278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.074571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.074604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.074885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.074919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.075052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.075086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.075367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.075400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.075612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.075655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.075963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.075997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.076231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.076264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.076505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.076538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.076796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.076830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.077054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.077086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.077317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.077351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.077604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.077652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.077810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.077841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.078023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.078054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.078383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.078416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.078637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.078671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.078955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.078987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.079175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.079212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.079411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.079443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.079720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.079753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.079974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.080007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.080262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.080295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.080525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.080558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.080837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.080871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.081019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.081052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.081335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.081368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.081592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.081635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.081854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.081887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.082142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.082175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.082372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.082405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.082688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.082723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.082926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.082959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.083261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.083293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.083585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.083618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.083901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.083934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.084188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.084221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.084416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.084448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.084731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.084765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.085077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.085109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.085313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.085346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.085553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.085586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.504 [2024-07-26 11:35:53.085784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.504 [2024-07-26 11:35:53.085817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.504 qpair failed and we were unable to recover it.
00:27:57.505 [2024-07-26 11:35:53.086040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.505 [2024-07-26 11:35:53.086073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.505 qpair failed and we were unable to recover it.
00:27:57.505 [2024-07-26 11:35:53.086329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.086361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.086623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.086671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.086789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.086823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.087019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.087052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.087337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.087370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.087635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.087668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.087973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.088006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.088276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.088309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.088592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.088625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.088933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.088966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.089232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.089264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.089571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.089603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.089901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.089936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.090159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.090192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.090446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.090478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.090765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.090800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.091062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.091095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.091366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.091398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.091702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.091735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.092005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.092037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.092294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.092327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.092529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.092561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.092830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.092864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.093069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.093102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.093306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.093339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.093594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.093638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.093885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.093919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.094220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.094253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.094551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.094588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.094862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.094896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.095120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.095152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.095434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.095466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.095669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.095704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.095970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.096002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.096258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.096291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.096436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.096469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.096617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.096667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.096924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.096957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.097165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.097198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.097387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.097420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.097701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.097735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.097958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.097991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.098151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.098184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.098417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.098449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.098705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.098739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.098928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.098960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.099163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.099197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.099407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.099439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.099694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.099729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.099981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.100014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.100231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.100264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.100474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.100507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.100714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.100748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.100933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.100966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.101224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.101257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.101462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.101495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.101765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.101799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.102056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.102089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.102392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.102424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.102714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.102749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.505 [2024-07-26 11:35:53.103006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.103039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.103295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.103328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.103647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.103682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.103947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.103979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 00:27:57.505 [2024-07-26 11:35:53.104265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.505 [2024-07-26 11:35:53.104297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.505 qpair failed and we were unable to recover it. 
00:27:57.506 [2024-07-26 11:35:53.104589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.104622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.104953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.104987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.105244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.105276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.105478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.105510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.105790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.105826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 
00:27:57.506 [2024-07-26 11:35:53.106082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.106115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.106304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.106337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.106620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.106665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.106973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.107006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 00:27:57.506 [2024-07-26 11:35:53.107304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.506 [2024-07-26 11:35:53.107336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.506 qpair failed and we were unable to recover it. 
00:27:57.506 [2024-07-26 11:35:53.107646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.506 [2024-07-26 11:35:53.107679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.506 qpair failed and we were unable to recover it.
00:27:57.506 [... the three-line record above repeats, unchanged except for timestamps, from 11:35:53.107817 through 11:35:53.139051; every attempt against tqpair=0x18b8f30, addr=10.0.0.2, port=4420 fails with connect() errno = 111 and "qpair failed and we were unable to recover it." ...]
00:27:57.784 [2024-07-26 11:35:53.139051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.784 [2024-07-26 11:35:53.139084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.784 qpair failed and we were unable to recover it.
00:27:57.785 [2024-07-26 11:35:53.139289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.139322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.139612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.139665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.139800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.139833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.140036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.140068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.140291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.140324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 
00:27:57.785 [2024-07-26 11:35:53.140524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.140557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.140693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.140727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.141007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.141040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.141271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.141303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.141488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.141521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 
00:27:57.785 [2024-07-26 11:35:53.141810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.141844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.141985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.142018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.142204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.142237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.142521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.142552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.142783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.142816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 
00:27:57.785 [2024-07-26 11:35:53.143098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.143130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.143366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.143399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.143660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.143694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.143894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.143927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.144183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.144216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 
00:27:57.785 [2024-07-26 11:35:53.144432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.144464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.144708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.144743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.145001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.145034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.145336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.145369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.145577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.145610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 
00:27:57.785 [2024-07-26 11:35:53.145826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.145859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.146082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.146114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.146391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.146424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.146718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.146757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.147031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.147064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 
00:27:57.785 [2024-07-26 11:35:53.147251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.147283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.147559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.147592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.147862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.147896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.148169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.785 [2024-07-26 11:35:53.148201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.785 qpair failed and we were unable to recover it. 00:27:57.785 [2024-07-26 11:35:53.148492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.148525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 
00:27:57.786 [2024-07-26 11:35:53.148719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.148753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.148940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.148972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.149229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.149260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.149399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.149432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.149699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.149733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 
00:27:57.786 [2024-07-26 11:35:53.150076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.150108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.150387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.150420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.150714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.150748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.150950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.150983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.151129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.151162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 
00:27:57.786 [2024-07-26 11:35:53.151460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.151493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.151766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.151800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.152118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.152150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.152348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.152380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.152672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.152707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 
00:27:57.786 [2024-07-26 11:35:53.152896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.152929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.153205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.153238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.153533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.153567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.153799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.153833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.154038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.154071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 
00:27:57.786 [2024-07-26 11:35:53.154377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.154420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.154688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.154722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.154921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.154954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.155145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.155178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.155459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.155492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 
00:27:57.786 [2024-07-26 11:35:53.155695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.155730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.155918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.155950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.156138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.156171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.156374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.156407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.156608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.156662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 
00:27:57.786 [2024-07-26 11:35:53.156865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.156898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.157087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.786 [2024-07-26 11:35:53.157119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.786 qpair failed and we were unable to recover it. 00:27:57.786 [2024-07-26 11:35:53.157396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.157428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.157727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.157761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.158035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.158068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 
00:27:57.787 [2024-07-26 11:35:53.158223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.158255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.158512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.158545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.158844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.158879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.159153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.159186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.159316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.159348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 
00:27:57.787 [2024-07-26 11:35:53.159555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.159587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.159830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.159865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.160073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.160105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.160411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.160444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 00:27:57.787 [2024-07-26 11:35:53.160661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.787 [2024-07-26 11:35:53.160695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.787 qpair failed and we were unable to recover it. 
00:27:57.805 [2024-07-26 11:35:53.190339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.805 [2024-07-26 11:35:53.190371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.805 qpair failed and we were unable to recover it. 00:27:57.805 [2024-07-26 11:35:53.190645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.806 [2024-07-26 11:35:53.190680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.806 qpair failed and we were unable to recover it. 00:27:57.806 [2024-07-26 11:35:53.190870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.806 [2024-07-26 11:35:53.190902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.806 qpair failed and we were unable to recover it. 00:27:57.806 [2024-07-26 11:35:53.191125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.806 [2024-07-26 11:35:53.191157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.806 qpair failed and we were unable to recover it. 00:27:57.806 [2024-07-26 11:35:53.191359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.806 [2024-07-26 11:35:53.191392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.807 qpair failed and we were unable to recover it. 
00:27:57.807 [2024-07-26 11:35:53.191618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.807 [2024-07-26 11:35:53.191661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.807 qpair failed and we were unable to recover it. 00:27:57.807 [2024-07-26 11:35:53.191970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.807 [2024-07-26 11:35:53.192003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.807 qpair failed and we were unable to recover it. 00:27:57.807 [2024-07-26 11:35:53.192207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.807 [2024-07-26 11:35:53.192239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.807 qpair failed and we were unable to recover it. 00:27:57.807 [2024-07-26 11:35:53.192499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.808 [2024-07-26 11:35:53.192532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.808 qpair failed and we were unable to recover it. 00:27:57.808 [2024-07-26 11:35:53.192755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.808 [2024-07-26 11:35:53.192790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.808 qpair failed and we were unable to recover it. 
00:27:57.808 [2024-07-26 11:35:53.193073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.808 [2024-07-26 11:35:53.193106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.808 qpair failed and we were unable to recover it. 00:27:57.808 [2024-07-26 11:35:53.193391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.808 [2024-07-26 11:35:53.193424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.808 qpair failed and we were unable to recover it. 00:27:57.808 [2024-07-26 11:35:53.193653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.808 [2024-07-26 11:35:53.193688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.808 qpair failed and we were unable to recover it. 00:27:57.808 [2024-07-26 11:35:53.193940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.809 [2024-07-26 11:35:53.193972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.809 qpair failed and we were unable to recover it. 00:27:57.809 [2024-07-26 11:35:53.194274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.809 [2024-07-26 11:35:53.194307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.809 qpair failed and we were unable to recover it. 
00:27:57.809 [2024-07-26 11:35:53.194585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.809 [2024-07-26 11:35:53.194619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.810 qpair failed and we were unable to recover it. 00:27:57.810 [2024-07-26 11:35:53.194889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.810 [2024-07-26 11:35:53.194922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.810 qpair failed and we were unable to recover it. 00:27:57.810 [2024-07-26 11:35:53.195122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.810 [2024-07-26 11:35:53.195155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.810 qpair failed and we were unable to recover it. 00:27:57.810 [2024-07-26 11:35:53.195430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.810 [2024-07-26 11:35:53.195463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.810 qpair failed and we were unable to recover it. 00:27:57.810 [2024-07-26 11:35:53.195656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.810 [2024-07-26 11:35:53.195691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.810 qpair failed and we were unable to recover it. 
00:27:57.810 [2024-07-26 11:35:53.195968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.810 [2024-07-26 11:35:53.196001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.810 qpair failed and we were unable to recover it. 00:27:57.811 [2024-07-26 11:35:53.196143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.811 [2024-07-26 11:35:53.196176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.811 qpair failed and we were unable to recover it. 00:27:57.811 [2024-07-26 11:35:53.196472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.811 [2024-07-26 11:35:53.196504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.811 qpair failed and we were unable to recover it. 00:27:57.811 [2024-07-26 11:35:53.196789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.811 [2024-07-26 11:35:53.196824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.811 qpair failed and we were unable to recover it. 00:27:57.811 [2024-07-26 11:35:53.197033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.811 [2024-07-26 11:35:53.197066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.811 qpair failed and we were unable to recover it. 
00:27:57.811 [2024-07-26 11:35:53.197300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.811 [2024-07-26 11:35:53.197332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.811 qpair failed and we were unable to recover it. 00:27:57.812 [2024-07-26 11:35:53.197546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.812 [2024-07-26 11:35:53.197578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.812 qpair failed and we were unable to recover it. 00:27:57.812 [2024-07-26 11:35:53.197892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.812 [2024-07-26 11:35:53.197929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.812 qpair failed and we were unable to recover it. 00:27:57.812 [2024-07-26 11:35:53.198158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.812 [2024-07-26 11:35:53.198198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.812 qpair failed and we were unable to recover it. 00:27:57.812 [2024-07-26 11:35:53.198418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.813 [2024-07-26 11:35:53.198452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.813 qpair failed and we were unable to recover it. 
00:27:57.813 [2024-07-26 11:35:53.198694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.813 [2024-07-26 11:35:53.198728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.813 qpair failed and we were unable to recover it. 00:27:57.813 [2024-07-26 11:35:53.198866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.813 [2024-07-26 11:35:53.198899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.813 qpair failed and we were unable to recover it. 00:27:57.813 [2024-07-26 11:35:53.199161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.814 [2024-07-26 11:35:53.199194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.814 qpair failed and we were unable to recover it. 00:27:57.814 [2024-07-26 11:35:53.199414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.814 [2024-07-26 11:35:53.199446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.814 qpair failed and we were unable to recover it. 00:27:57.814 [2024-07-26 11:35:53.199702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.814 [2024-07-26 11:35:53.199736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.814 qpair failed and we were unable to recover it. 
00:27:57.814 [2024-07-26 11:35:53.199927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.814 [2024-07-26 11:35:53.199960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.814 qpair failed and we were unable to recover it. 00:27:57.815 [2024-07-26 11:35:53.200158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.815 [2024-07-26 11:35:53.200191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.815 qpair failed and we were unable to recover it. 00:27:57.815 [2024-07-26 11:35:53.200383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.815 [2024-07-26 11:35:53.200416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.815 qpair failed and we were unable to recover it. 00:27:57.815 [2024-07-26 11:35:53.200675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.816 [2024-07-26 11:35:53.200711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.816 qpair failed and we were unable to recover it. 00:27:57.816 [2024-07-26 11:35:53.200974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.816 [2024-07-26 11:35:53.201007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.816 qpair failed and we were unable to recover it. 
00:27:57.816 [2024-07-26 11:35:53.201198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.816 [2024-07-26 11:35:53.201230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.816 qpair failed and we were unable to recover it. 00:27:57.816 [2024-07-26 11:35:53.201518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.816 [2024-07-26 11:35:53.201552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.816 qpair failed and we were unable to recover it. 00:27:57.816 [2024-07-26 11:35:53.201767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.816 [2024-07-26 11:35:53.201802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.816 qpair failed and we were unable to recover it. 00:27:57.816 [2024-07-26 11:35:53.202113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.816 [2024-07-26 11:35:53.202146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.817 qpair failed and we were unable to recover it. 00:27:57.817 [2024-07-26 11:35:53.202333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.817 [2024-07-26 11:35:53.202366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.817 qpair failed and we were unable to recover it. 
00:27:57.817 [2024-07-26 11:35:53.202618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.817 [2024-07-26 11:35:53.202662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.817 qpair failed and we were unable to recover it. 00:27:57.817 [2024-07-26 11:35:53.202925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.817 [2024-07-26 11:35:53.202958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.817 qpair failed and we were unable to recover it. 00:27:57.817 [2024-07-26 11:35:53.203269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.817 [2024-07-26 11:35:53.203301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.817 qpair failed and we were unable to recover it. 00:27:57.817 [2024-07-26 11:35:53.203593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.817 [2024-07-26 11:35:53.203638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.817 qpair failed and we were unable to recover it. 00:27:57.817 [2024-07-26 11:35:53.203846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.817 [2024-07-26 11:35:53.203880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.817 qpair failed and we were unable to recover it. 
00:27:57.818 [2024-07-26 11:35:53.204033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.204066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.204225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.204258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.204483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.204516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.204757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.204790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.204945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.204978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 
00:27:57.818 [2024-07-26 11:35:53.205194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.205233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.205453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.205487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.205620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.205679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.205883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.205917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.206136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.206169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 
00:27:57.818 [2024-07-26 11:35:53.206435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.206468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.206609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.206654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.206860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.206893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.207160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.207193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.207381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.207414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 
00:27:57.818 [2024-07-26 11:35:53.207669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.207706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.207917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.207951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.208109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.208142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.208439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.208474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.208776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.208812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 
00:27:57.818 [2024-07-26 11:35:53.208972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.209010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.209196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.209229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.209368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.209400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.209588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.209621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 00:27:57.818 [2024-07-26 11:35:53.209896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.818 [2024-07-26 11:35:53.209928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.818 qpair failed and we were unable to recover it. 
00:27:57.818 [2024-07-26 11:35:53.210062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.210095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.210390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.210423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.210635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.210669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.210858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.210892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.211162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.211195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.211401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.211434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.211549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.211582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.211743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.211785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.211985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.212018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.212335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.212367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.212569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.212603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.212760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.212793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.213063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.213096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.213362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.213395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.213612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.213657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.213854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.213886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.214071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.214104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.214256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.214288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.214488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.214520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.214724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.214759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.214979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.215012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.215219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.215253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.215460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.215494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.215715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.215749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.215907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.215940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.216206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.216238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.216445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.216478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.216763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.216797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.216953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.216986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.217117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.217149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.217342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.217375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.217606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.217651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.217883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.217916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.218144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.218177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.819 [2024-07-26 11:35:53.218323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.819 [2024-07-26 11:35:53.218358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.819 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.218573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.218607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.218887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.218920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.219206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.219240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.219551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.219584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.219807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.219841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.220062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.220095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.220399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.220432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.220695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.220730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.220940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.220973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.221183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.221216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.221473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.221506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.221809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.221843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.222034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.222067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.222361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.222437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.222786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.222825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.222989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.223023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.223278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.223312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.223593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.223636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.223849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.223884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.224139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.224171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.224482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.224518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.224795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.224829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.225037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.225070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.820 [2024-07-26 11:35:53.225222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.820 [2024-07-26 11:35:53.225257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.820 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.225482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.225516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.225707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.225744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.226003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.226051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.226273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.226306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.226426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.226460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.226582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.226615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.226909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.226943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.227228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.227263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.227521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.227554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.227761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.227794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.227994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.228027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.228225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.228258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.228525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.228558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.228747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.228783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.229057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.229090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.229343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.229378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.229590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.229625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.821 qpair failed and we were unable to recover it.
00:27:57.821 [2024-07-26 11:35:53.229767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.821 [2024-07-26 11:35:53.229799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.230014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.230048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.230336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.230369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.230642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.230677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.231000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.231033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.231264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.231296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.231450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.231484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.231759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.231793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.232006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.232039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.232283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.232316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.232505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.232537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.232745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.232779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.232912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.232951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.233257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.233290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.233492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.233524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.233728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.233762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.234036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.234068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.234302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.234334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.234532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.234566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.234945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.234980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.235192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.235224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.235527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.235559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.235845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.235880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.236185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.236219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.822 [2024-07-26 11:35:53.236411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.822 [2024-07-26 11:35:53.236448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.822 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.236652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.236686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.236901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.236934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.237129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.237161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.237377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.237410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.237624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.237664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.237806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.237839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.238024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.238057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.238245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.238277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.238556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.238589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.238718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.238752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.238895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.238928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.239136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.239170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.239466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.239499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.239783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.823 [2024-07-26 11:35:53.239818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.823 qpair failed and we were unable to recover it.
00:27:57.823 [2024-07-26 11:35:53.240105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.240138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.240460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.240494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.240689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.240724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.240982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.241016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.241286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.241318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 
00:27:57.824 [2024-07-26 11:35:53.241512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.241545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.241830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.241864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.242071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.242106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.242303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.824 [2024-07-26 11:35:53.242337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.824 qpair failed and we were unable to recover it. 00:27:57.824 [2024-07-26 11:35:53.242619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.242659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 
00:27:57.825 [2024-07-26 11:35:53.242826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.242859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 00:27:57.825 [2024-07-26 11:35:53.243058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.243091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 00:27:57.825 [2024-07-26 11:35:53.243320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.243354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 00:27:57.825 [2024-07-26 11:35:53.243586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.243624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 00:27:57.825 [2024-07-26 11:35:53.243875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.243908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 
00:27:57.825 [2024-07-26 11:35:53.244069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.244101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 00:27:57.825 [2024-07-26 11:35:53.244398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.244436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 00:27:57.825 [2024-07-26 11:35:53.244659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.825 [2024-07-26 11:35:53.244694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.825 qpair failed and we were unable to recover it. 00:27:57.826 [2024-07-26 11:35:53.244914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.826 [2024-07-26 11:35:53.244947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.826 qpair failed and we were unable to recover it. 00:27:57.826 [2024-07-26 11:35:53.245094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.826 [2024-07-26 11:35:53.245128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.826 qpair failed and we were unable to recover it. 
00:27:57.826 [2024-07-26 11:35:53.245352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.826 [2024-07-26 11:35:53.245385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.826 qpair failed and we were unable to recover it. 00:27:57.826 [2024-07-26 11:35:53.245594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.826 [2024-07-26 11:35:53.245639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.826 qpair failed and we were unable to recover it. 00:27:57.826 [2024-07-26 11:35:53.245848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.826 [2024-07-26 11:35:53.245881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.826 qpair failed and we were unable to recover it. 00:27:57.826 [2024-07-26 11:35:53.246092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.826 [2024-07-26 11:35:53.246124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.826 qpair failed and we were unable to recover it. 00:27:57.826 [2024-07-26 11:35:53.246389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.826 [2024-07-26 11:35:53.246422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.827 qpair failed and we were unable to recover it. 
00:27:57.827 [... repeated connect() failed, errno = 111 / qpair failed messages elided (11:35:53.246638 through 11:35:53.247562) ...]
00:27:57.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1671360 Killed "${NVMF_APP[@]}" "$@"
00:27:57.827 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:57.827 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:57.827 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:57.827 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:57.827 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:57.829 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1672092
00:27:57.829 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1672092
00:27:57.829 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:57.829 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1672092 ']'
00:27:57.829 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:57.829 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:57.830 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:57.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:57.830 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:57.830 11:35:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:57.830 [... the xtrace lines above were interleaved with repeated connect() failed, errno = 111 / qpair failed messages (11:35:53.247917 through 11:35:53.259347), elided here; all with tqpair=0x7f41f0000b90, addr=10.0.0.2, port=4420 ...]
00:27:57.831 [2024-07-26 11:35:53.259620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.831 [2024-07-26 11:35:53.259664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.831 qpair failed and we were unable to recover it.
00:27:57.833 [... same error triplet repeats through 11:35:53.263145, tqpair=0x7f41f0000b90, addr=10.0.0.2, port=4420 ...]
00:27:57.833 [2024-07-26 11:35:53.263383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.263416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 00:27:57.833 [2024-07-26 11:35:53.263604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.263647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 00:27:57.833 [2024-07-26 11:35:53.263806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.263839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 00:27:57.833 [2024-07-26 11:35:53.264042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.264077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 00:27:57.833 [2024-07-26 11:35:53.264237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.264269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 
00:27:57.833 [2024-07-26 11:35:53.264399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.264432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 00:27:57.833 [2024-07-26 11:35:53.264621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.264664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 00:27:57.833 [2024-07-26 11:35:53.264853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.833 [2024-07-26 11:35:53.264886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.833 qpair failed and we were unable to recover it. 00:27:57.833 [2024-07-26 11:35:53.265104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.834 [2024-07-26 11:35:53.265138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.834 qpair failed and we were unable to recover it. 00:27:57.834 [2024-07-26 11:35:53.265356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.834 [2024-07-26 11:35:53.265390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.834 qpair failed and we were unable to recover it. 
00:27:57.834 [2024-07-26 11:35:53.265585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.834 [2024-07-26 11:35:53.265623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.834 qpair failed and we were unable to recover it. 00:27:57.834 [2024-07-26 11:35:53.265845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.834 [2024-07-26 11:35:53.265880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.834 qpair failed and we were unable to recover it. 00:27:57.834 [2024-07-26 11:35:53.266162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.835 [2024-07-26 11:35:53.266196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.835 qpair failed and we were unable to recover it. 00:27:57.835 [2024-07-26 11:35:53.266395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.835 [2024-07-26 11:35:53.266428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.835 qpair failed and we were unable to recover it. 00:27:57.835 [2024-07-26 11:35:53.266641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.835 [2024-07-26 11:35:53.266676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.835 qpair failed and we were unable to recover it. 
00:27:57.835 [2024-07-26 11:35:53.266876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.835 [2024-07-26 11:35:53.266909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.835 qpair failed and we were unable to recover it. 00:27:57.835 [2024-07-26 11:35:53.267050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.835 [2024-07-26 11:35:53.267083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.836 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.267299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.267333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.267471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.267505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.267719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.267753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 
00:27:57.837 [2024-07-26 11:35:53.267898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.267931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.268102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.268137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.268483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.268516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.268803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.268836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.269079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.269111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 
00:27:57.837 [2024-07-26 11:35:53.269321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.269353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.269643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.269676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.269827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.269862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.270131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.270164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.270381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.270414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 
00:27:57.837 [2024-07-26 11:35:53.270617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.270658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.270874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.270907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.271120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.271155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.271360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.271392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.271689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.271722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 
00:27:57.837 [2024-07-26 11:35:53.271881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.271914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.837 [2024-07-26 11:35:53.272171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.837 [2024-07-26 11:35:53.272203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.837 qpair failed and we were unable to recover it. 00:27:57.838 [2024-07-26 11:35:53.272393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.838 [2024-07-26 11:35:53.272426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.838 qpair failed and we were unable to recover it. 00:27:57.838 [2024-07-26 11:35:53.272708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.838 [2024-07-26 11:35:53.272742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.838 qpair failed and we were unable to recover it. 00:27:57.838 [2024-07-26 11:35:53.272963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.838 [2024-07-26 11:35:53.272995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.838 qpair failed and we were unable to recover it. 
00:27:57.838 [2024-07-26 11:35:53.273125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.838 [2024-07-26 11:35:53.273157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.838 qpair failed and we were unable to recover it. 00:27:57.838 [2024-07-26 11:35:53.273479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.838 [2024-07-26 11:35:53.273512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.838 qpair failed and we were unable to recover it. 00:27:57.838 [2024-07-26 11:35:53.273739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.838 [2024-07-26 11:35:53.273773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.838 qpair failed and we were unable to recover it. 00:27:57.838 [2024-07-26 11:35:53.273924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.838 [2024-07-26 11:35:53.273957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.839 qpair failed and we were unable to recover it. 00:27:57.839 [2024-07-26 11:35:53.274143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.839 [2024-07-26 11:35:53.274175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.839 qpair failed and we were unable to recover it. 
00:27:57.839 [2024-07-26 11:35:53.274441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.839 [2024-07-26 11:35:53.274474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.839 qpair failed and we were unable to recover it. 00:27:57.839 [2024-07-26 11:35:53.274667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.839 [2024-07-26 11:35:53.274701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.839 qpair failed and we were unable to recover it. 00:27:57.839 [2024-07-26 11:35:53.274832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.839 [2024-07-26 11:35:53.274865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.839 qpair failed and we were unable to recover it. 00:27:57.839 [2024-07-26 11:35:53.275096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.839 [2024-07-26 11:35:53.275128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.839 qpair failed and we were unable to recover it. 00:27:57.839 [2024-07-26 11:35:53.275391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.275423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 
00:27:57.840 [2024-07-26 11:35:53.275642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.275681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 00:27:57.840 [2024-07-26 11:35:53.275946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.275979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 00:27:57.840 [2024-07-26 11:35:53.276265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.276298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 00:27:57.840 [2024-07-26 11:35:53.276518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.276551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 00:27:57.840 [2024-07-26 11:35:53.276777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.276811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 
00:27:57.840 [2024-07-26 11:35:53.276964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.276997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 00:27:57.840 [2024-07-26 11:35:53.277227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.277261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 00:27:57.840 [2024-07-26 11:35:53.277530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.277563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.840 qpair failed and we were unable to recover it. 00:27:57.840 [2024-07-26 11:35:53.277774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.840 [2024-07-26 11:35:53.277808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.277971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.278003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 
00:27:57.841 [2024-07-26 11:35:53.278199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.278232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.278459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.278491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.278695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.278728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.278934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.278967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.279134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.279167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 
00:27:57.841 [2024-07-26 11:35:53.279382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.279415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.279639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.279673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.279890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.279922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.280062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.280095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.280310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.280342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 
00:27:57.841 [2024-07-26 11:35:53.280489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.280521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.280736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.280769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.280942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.280974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.281232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.281265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 00:27:57.841 [2024-07-26 11:35:53.281391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.281424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it. 
00:27:57.841 [2024-07-26 11:35:53.281703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.841 [2024-07-26 11:35:53.281738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.841 qpair failed and we were unable to recover it.
[... the same error pair — posix.c:1023:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 — repeats continuously from 11:35:53.281 through 11:35:53.304, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:57.843 [2024-07-26 11:35:53.304879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.304911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.305108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.305140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.305257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.305289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.305411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.305442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.305656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.305689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 
00:27:57.843 [2024-07-26 11:35:53.305832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.305865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.305988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.306021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.306217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.306249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.306381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.306416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.306546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.306578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 
00:27:57.843 [2024-07-26 11:35:53.306825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.306858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.306971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.307002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.307122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.307153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.307333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.307364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.307551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.307583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 
00:27:57.843 [2024-07-26 11:35:53.307798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.307831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.307945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.307978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.308233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.308264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.308480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.308512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 00:27:57.843 [2024-07-26 11:35:53.308649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.843 [2024-07-26 11:35:53.308682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.843 qpair failed and we were unable to recover it. 
00:27:57.843 [2024-07-26 11:35:53.308875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.308914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.309131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.309163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.309325] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization...
00:27:57.843 [2024-07-26 11:35:53.309349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.309381] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:57.843 [2024-07-26 11:35:53.309384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.309572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.309601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.309824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.309857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.310132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.310165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.310313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.310344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.310474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.310505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.310638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.310671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.310859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.310892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.843 qpair failed and we were unable to recover it.
00:27:57.843 [2024-07-26 11:35:53.311104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.843 [2024-07-26 11:35:53.311137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.311254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.311288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.311474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.311513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.311712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.311747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.311941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.311974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.312119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.312153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.312431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.312464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.312613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.312675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.312796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.312830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.312963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.312996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.313256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.313289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.313475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.313509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.313702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.313737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.314000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.314033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.314227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.314258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.314393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.314426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.314562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.314594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.314738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.314771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.314904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.314936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.315138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.315170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.315357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.315389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.315527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.315559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.315756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.315788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.315981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.316014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.316144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.316176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.316295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.316327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.316460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.316500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.316619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.316662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.316850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.316882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.317048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.317122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.317353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.317390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.317537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.317572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.317780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.317815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.317964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.317997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.318121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.318154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.318292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.318325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.318452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.318483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.318684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.318717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.318859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.318891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.319014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.319046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.319237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.319269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.319402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.319435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.319554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.319595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.319664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c6ff0 (9): Bad file descriptor
00:27:57.844 [2024-07-26 11:35:53.319908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.319983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.320203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.320239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.320432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.320465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.320576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.320614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.320838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.320870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.321072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.321106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 
00:27:57.844 [2024-07-26 11:35:53.321313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.321346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.321529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.321563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.321759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.321792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.321996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.322028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 00:27:57.844 [2024-07-26 11:35:53.322230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.844 [2024-07-26 11:35:53.322262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.844 qpair failed and we were unable to recover it. 
00:27:57.844 [2024-07-26 11:35:53.322395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.322427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.322555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.322597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.322846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.322892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.323085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.323119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.323300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.323336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.323466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.844 [2024-07-26 11:35:53.323499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.844 qpair failed and we were unable to recover it.
00:27:57.844 [2024-07-26 11:35:53.323639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.323673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.323786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.323824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.323972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.324005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.324125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.324157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.324342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.324375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.324501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.324533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.324733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.324767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.324974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.325007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.325250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.325283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.325419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.325452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.325573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.325605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.325773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.325806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.325994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.326026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.326223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.326256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.326389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.326422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.326563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.326596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.326906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.326943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.327086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.327119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.327313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.327345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.327484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.327517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.327698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.327733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.327933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.327967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.328096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.328138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.328380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.328411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.328594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.328624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.328757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.328787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.328969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.328999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.329102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.329133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.329291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.329324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.329453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.329485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.329686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.329719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.329912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.329945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.330067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.330100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.330240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.330273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.330527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.330560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.330706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.330740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.330876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.330910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.331041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.331074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.331212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.331245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.331481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.331514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.331716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.331750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.331957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.331990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.332186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.332219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.332432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.332465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.332591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.332623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.332883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.332916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.333118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.333151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.333333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.333366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.333494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.333526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.333649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.333687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.333811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.333843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.334045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.334078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.334211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.334243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.334437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.334470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.334603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.334649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.334840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.334872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.335065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.335099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.335227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.845 [2024-07-26 11:35:53.335259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.845 qpair failed and we were unable to recover it.
00:27:57.845 [2024-07-26 11:35:53.335390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.335423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.335603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.335650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.335855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.335887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.336023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.336056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.336200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.336233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.336363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.336397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.336514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.336546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.336860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.336895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.337033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.337066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.337208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.337241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.337436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.337468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.337790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.337823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.338022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.338055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.338280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.338313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.338492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.338524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.338648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.338681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.338865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.338898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.339022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.339054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.339272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.339310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.339494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.339526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.339706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.339739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.339939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.339972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.340156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.340188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.340332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.340363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.340545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.340577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.340796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.340830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.340946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.340978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.846 [2024-07-26 11:35:53.341191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.846 [2024-07-26 11:35:53.341223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.846 qpair failed and we were unable to recover it.
00:27:57.847 [2024-07-26 11:35:53.341420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.847 [2024-07-26 11:35:53.341453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.847 qpair failed and we were unable to recover it.
00:27:57.847 [2024-07-26 11:35:53.341586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.847 [2024-07-26 11:35:53.341618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.847 qpair failed and we were unable to recover it.
00:27:57.847 [2024-07-26 11:35:53.341849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.847 [2024-07-26 11:35:53.341881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.847 qpair failed and we were unable to recover it.
00:27:57.847 EAL: No free 2048 kB hugepages reported on node 1
00:27:57.847 [connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for tqpair=0x18b8f30, addr=10.0.0.2, port=4420, 2024-07-26 11:35:53.342059 through 11:35:53.342892]
00:27:57.847 [connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for tqpair=0x18b8f30, addr=10.0.0.2, port=4420, 2024-07-26 11:35:53.343018 through 11:35:53.352447]
00:27:57.848 [2024-07-26 11:35:53.352683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.352754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.352901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.352938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.353071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.353104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.353238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.353270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.353470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.353502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.353639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.353673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.353810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.353842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.353962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.353994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.354212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.354244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.354360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.354392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.354522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.354554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.354687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.354720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.354852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.354883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.355013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.355051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.355166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.355198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.355331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.355363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.355483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.355515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.355689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.355721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.355838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.355870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.356000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.356031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.356250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.356281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.356422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.356455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.356585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.356620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.356743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.356775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.356973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.357008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.357204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.357237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.357420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.357452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.357575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.357608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.357751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.357783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.357926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.357958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.358088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.358120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.358251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.358283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.358487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.358518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.358763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.358796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.358929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.358962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.359092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.359123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.359250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.359281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.359395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.359427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.359634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.359666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 00:27:57.848 [2024-07-26 11:35:53.359798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.848 [2024-07-26 11:35:53.359829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.848 qpair failed and we were unable to recover it. 
00:27:57.848 [2024-07-26 11:35:53.360039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.848 [2024-07-26 11:35:53.360085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.848 qpair failed and we were unable to recover it.
00:27:57.848 [2024-07-26 11:35:53.360225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.848 [2024-07-26 11:35:53.360259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.848 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.360385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.360417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.360549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.360581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.360791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.360826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.360958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.360990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.361167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.361199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.361392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.361423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.361571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.361603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.361759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.361792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.361930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.361962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.362080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.362112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.362322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.362354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.362478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.362520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.362704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.362738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.362895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.362927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.363175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.363206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.363415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.363447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.363570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.363602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.363816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.363849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.364098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.364130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.364262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.364294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.364556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.364588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.364810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.364843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.365031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.365062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.365309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.365341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.365519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.365551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.365700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.365734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.365856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.365887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.366021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.366053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.366172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.366205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.366386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.366418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.366548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.366581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.366722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.366755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.366897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.366929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.367137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.367169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.367290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.367321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.367539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.367571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.367703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.367736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.367937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.367970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.368106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.368144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.368330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.368361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.368585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.368618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.368750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.368783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.368975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.369007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.369136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.369169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.369284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.369316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.369496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.369532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.369667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.369699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.369816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.369847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.369969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.370001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.370197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.370229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.370403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.370435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.370547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.370584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.370735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.370769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.849 qpair failed and we were unable to recover it.
00:27:57.849 [2024-07-26 11:35:53.370960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.849 [2024-07-26 11:35:53.370993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.371106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.371137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.371403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.371435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.371576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.371607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.371741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.371774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.371891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.371923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.372178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.372209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.372387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.372418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.372537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.372569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.372688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.372719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.372922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.372954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.373097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.373128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.373248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.373279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.373398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.373430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.373609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.373652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.373829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.373861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.374103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.374134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.374339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.374369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.374509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.374540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.374726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.374759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.374933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.374964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.375140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.375171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.375378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.375409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.375528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.375559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.375712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.375744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.375933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.375976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.376214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.376246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.376353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.376384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.376512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.376544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.376684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.376718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.376848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.376879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.377005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.377037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.377170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.377201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.377449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.377481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.377608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.377651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.377766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.377798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.378095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.378126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.378267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.378299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.850 qpair failed and we were unable to recover it.
00:27:57.850 [2024-07-26 11:35:53.378419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.850 [2024-07-26 11:35:53.378450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.851 qpair failed and we were unable to recover it.
00:27:57.851 [2024-07-26 11:35:53.378573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.851 [2024-07-26 11:35:53.378605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.851 qpair failed and we were unable to recover it.
00:27:57.851 [2024-07-26 11:35:53.378748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.378781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.378893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.378934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.379129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.379170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.379341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.379382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.379509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.379540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.379718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.379754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.379889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.379922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.380115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.380148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.380358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.380393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.380521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.380554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.380750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.380784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.380973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.381009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.381200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.381238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.381361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.381393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.381508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.381539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.381666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.381700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.381825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.381856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.381972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.382006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.382120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.382152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.382256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.382289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.382457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.382489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.382615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.382660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.382793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.382825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.383035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.383066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.383252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.383284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.383413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.383444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.383608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.383652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.383785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.383817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.383926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.383958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.384095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.384126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.384258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.384291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.384427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.384459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.384604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.384650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.384781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.384813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.384932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.384964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.385166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.385197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.385398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.385429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.385618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.385658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.385782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.385814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.385928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.385964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.386113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.386145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.386275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.386306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.386507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.386538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.386658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.386690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.386812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.386843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.386969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.387001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.387179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.387210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.387331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.387363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.387506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.387538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.387556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.851 [2024-07-26 11:35:53.387666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.387696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.851 [2024-07-26 11:35:53.387811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.387842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.388040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.388071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.388190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.388222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.388349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.388381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 00:27:57.851 [2024-07-26 11:35:53.388508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.851 [2024-07-26 11:35:53.388540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.851 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.388719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.388752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.388878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.388911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.389066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.389099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.389329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.389361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.389488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.389519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.389638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.389670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.389790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.389823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.389940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.389972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.390097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.390131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.390256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.390288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.390392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.390424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.390554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.390591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.390707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.390740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.390852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.390883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.391080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.391111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.391288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.391319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.391447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.391479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.391620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.391675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.391793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.391825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.391955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.391986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.392105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.392136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.392249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.392280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.392460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.392492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.392617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.392661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.392780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.392811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.392943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.392976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.393230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.393262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.393415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.393446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.393561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.393593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.393735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.393768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.394008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.394039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.394266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.394299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.394428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.394459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.394574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.394607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.394734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.394766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.394873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.394904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.395027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.395059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.395302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.395335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.395524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.395562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 00:27:57.852 [2024-07-26 11:35:53.395846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.852 [2024-07-26 11:35:53.395880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.852 qpair failed and we were unable to recover it. 
00:27:57.852 [2024-07-26 11:35:53.396012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.396045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.396271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.396304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.396508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.396541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.396732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.396771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.396960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.396992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.397173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.397206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.397348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.397380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.397638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.397673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.397877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.397910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.398106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.398138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.398375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.852 [2024-07-26 11:35:53.398406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.852 qpair failed and we were unable to recover it.
00:27:57.852 [2024-07-26 11:35:53.398597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.398637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.398889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.398921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.399067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.399098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.399366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.399399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.399571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.399602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.399763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.399797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.399937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.399969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.400174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.400205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.400498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.400529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.400669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.400702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.400850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.400882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.401097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.401129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.401466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.401498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.401754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.401787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.401983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.402020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.402305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.402336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.402578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.402610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.402870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.402903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.403158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.403190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.403449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.403481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.403604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.403648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.403790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.403821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.404019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.404050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.404177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.404208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.404328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.404360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.404555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.404587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.404828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.404861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.404975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.405006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.405161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.405210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.405437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.405469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.405751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.405787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.405994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.406025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.406309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.406340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.406568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.406600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.406787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.406819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.406958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.406990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.407119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.407150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.407387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.407419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.407707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.407738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.407868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.407899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.408035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.408066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.408262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.408301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.408513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.408545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.408795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.408827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.408967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.408998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.853 [2024-07-26 11:35:53.409244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.853 [2024-07-26 11:35:53.409275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.853 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.409517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.409548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.409739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.409772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.409918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.409949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.410152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.410183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.410480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.410512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.410766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.410798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.410929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.410961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.411203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.411233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.411428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.411459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.411739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.411772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.411968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.412000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.412127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.412159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.412300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.412332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.412592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.412624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.412838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.412869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.413042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.413073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.413216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.413247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.413460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.413491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.413735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.413767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.414009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.414040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.414227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.414258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.414492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.414524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.414805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.414846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.415072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.415104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.415297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.415330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.415508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.415538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.415730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.415763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.415886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.415917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.416040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.416072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.416269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.416301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.416437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.416468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.416646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.416678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.416870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.416902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.417052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.417083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.417412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.417444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.417688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.417726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.417919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.417949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.418088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.854 [2024-07-26 11:35:53.418119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:57.854 qpair failed and we were unable to recover it.
00:27:57.854 [2024-07-26 11:35:53.418329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.854 [2024-07-26 11:35:53.418360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.854 qpair failed and we were unable to recover it. 00:27:57.854 [2024-07-26 11:35:53.418551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.854 [2024-07-26 11:35:53.418582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.854 qpair failed and we were unable to recover it. 00:27:57.854 [2024-07-26 11:35:53.418802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.854 [2024-07-26 11:35:53.418834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.854 qpair failed and we were unable to recover it. 00:27:57.854 [2024-07-26 11:35:53.418973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.854 [2024-07-26 11:35:53.419004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.854 qpair failed and we were unable to recover it. 00:27:57.854 [2024-07-26 11:35:53.419141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.854 [2024-07-26 11:35:53.419171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.854 qpair failed and we were unable to recover it. 
00:27:57.854 [2024-07-26 11:35:53.419349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.854 [2024-07-26 11:35:53.419380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.854 qpair failed and we were unable to recover it. 00:27:57.854 [2024-07-26 11:35:53.419641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.419674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.419873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.419904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.420093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.420124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.420252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.420283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 
00:27:57.855 [2024-07-26 11:35:53.420414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.420445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.420592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.420650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.420830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.420875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.421044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.421087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.421402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.421462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 
00:27:57.855 [2024-07-26 11:35:53.421733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.421780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.421923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.421955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.422135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.422166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.422429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.422461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.422690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.422724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 
00:27:57.855 [2024-07-26 11:35:53.422919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.422951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.423091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.423122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.423385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.423417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.423675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.423709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.423881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.423927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 
00:27:57.855 [2024-07-26 11:35:53.424149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.424180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.424475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.424506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.424684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.424717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.424861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.424892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 00:27:57.855 [2024-07-26 11:35:53.425087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.855 [2024-07-26 11:35:53.425119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:57.855 qpair failed and we were unable to recover it. 
00:27:58.141 [2024-07-26 11:35:53.425460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.141 [2024-07-26 11:35:53.425524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.141 qpair failed and we were unable to recover it. 00:27:58.141 [2024-07-26 11:35:53.425728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.141 [2024-07-26 11:35:53.425779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.141 qpair failed and we were unable to recover it. 00:27:58.141 [2024-07-26 11:35:53.426095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.141 [2024-07-26 11:35:53.426128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.141 qpair failed and we were unable to recover it. 00:27:58.141 [2024-07-26 11:35:53.426319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.141 [2024-07-26 11:35:53.426351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.141 qpair failed and we were unable to recover it. 00:27:58.141 [2024-07-26 11:35:53.426615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.141 [2024-07-26 11:35:53.426660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.141 qpair failed and we were unable to recover it. 
00:27:58.141 [2024-07-26 11:35:53.426821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.141 [2024-07-26 11:35:53.426853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.141 qpair failed and we were unable to recover it. 00:27:58.141 [2024-07-26 11:35:53.427096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.427127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.427379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.427410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.427569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.427601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.427763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.427794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 
00:27:58.142 [2024-07-26 11:35:53.427984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.428016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.428277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.428309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.428595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.428647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.428846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.428878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.429168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.429201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 
00:27:58.142 [2024-07-26 11:35:53.429410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.429442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.429730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.429763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.430009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.430042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.430235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.430267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.430511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.430544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 
00:27:58.142 [2024-07-26 11:35:53.430796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.430829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.431048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.431081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.431221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.431253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.431467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.431498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.431713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.431747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 
00:27:58.142 [2024-07-26 11:35:53.431936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.431969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.432108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.432140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.432316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.432351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.432546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.432579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.432798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.432832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 
00:27:58.142 [2024-07-26 11:35:53.432979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.433011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.433274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.433307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.433504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.433537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.433732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.433765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.434009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.434048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 
00:27:58.142 [2024-07-26 11:35:53.434228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.434260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.434530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.434562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.434733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.434764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.434962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.434993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.435200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.435232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 
00:27:58.142 [2024-07-26 11:35:53.435474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.435506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.435716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.435748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.435945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.142 [2024-07-26 11:35:53.435979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.142 qpair failed and we were unable to recover it. 00:27:58.142 [2024-07-26 11:35:53.436168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.436199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.436465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.436496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 
00:27:58.143 [2024-07-26 11:35:53.436759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.436791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.436993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.437024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.437149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.437179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.437389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.437422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.437643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.437676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 
00:27:58.143 [2024-07-26 11:35:53.437865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.437897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.438093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.438124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.438334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.438365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.438665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.438699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.438886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.438918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 
00:27:58.143 [2024-07-26 11:35:53.439097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.439128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.439391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.439422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.439596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.439635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.439786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.439816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 00:27:58.143 [2024-07-26 11:35:53.440035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.143 [2024-07-26 11:35:53.440066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.143 qpair failed and we were unable to recover it. 
00:27:58.146 [2024-07-26 11:35:53.465715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:58.146 [2024-07-26 11:35:53.465750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:58.146 [2024-07-26 11:35:53.465762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:58.146 [2024-07-26 11:35:53.465768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:58.146 [2024-07-26 11:35:53.465773] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:58.146 [2024-07-26 11:35:53.465891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:58.146 [2024-07-26 11:35:53.465978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:58.146 [2024-07-26 11:35:53.466062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:58.146 [2024-07-26 11:35:53.466064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:58.149 [2024-07-26 11:35:53.487887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.487918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.488074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.488106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.488254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.488287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.488416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.488448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.488569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.488602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 
00:27:58.149 [2024-07-26 11:35:53.488854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.488888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.489077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.489109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.489324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.489355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.489597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.489638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.489778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.489810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 
00:27:58.149 [2024-07-26 11:35:53.489996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.490029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.490166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.490197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.490461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.490493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.490773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.490808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.491007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.491046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 
00:27:58.149 [2024-07-26 11:35:53.491198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.491230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.491419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.491452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.491744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.491776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.491920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.491951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.492136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.492168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 
00:27:58.149 [2024-07-26 11:35:53.492303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.492334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.492597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.492637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.492896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.492928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.493134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.493165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.493451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.493483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 
00:27:58.149 [2024-07-26 11:35:53.493735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.493769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.493963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.493995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.494191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.494223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.494436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.494469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.494680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.494713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 
00:27:58.149 [2024-07-26 11:35:53.494997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.495028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.495226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.495258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.495447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.495479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.495605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.495645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.495779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.495810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 
00:27:58.149 [2024-07-26 11:35:53.496016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.496048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.149 [2024-07-26 11:35:53.496261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.149 [2024-07-26 11:35:53.496293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.149 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.496425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.496456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.496697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.496730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.496865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.496897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 
00:27:58.150 [2024-07-26 11:35:53.497146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.497177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.497307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.497339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.497547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.497578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.497777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.497810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.497957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.497989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 
00:27:58.150 [2024-07-26 11:35:53.498239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.498270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.498496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.498530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.498719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.498752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.499019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.499050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.499183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.499215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 
00:27:58.150 [2024-07-26 11:35:53.499423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.499456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.499656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.499688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.499931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.499963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.500137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.500169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.500353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.500390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 
00:27:58.150 [2024-07-26 11:35:53.500581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.500613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.500818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.500850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.501050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.501083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.501218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.501250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.501369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.501400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 
00:27:58.150 [2024-07-26 11:35:53.501619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.501658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.501845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.501877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.502068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.502099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.502435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.502467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.502755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.502788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 
00:27:58.150 [2024-07-26 11:35:53.502918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.502950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.503191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.503223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.503361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.503391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.503534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.503566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.503761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.503793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 
00:27:58.150 [2024-07-26 11:35:53.504012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.504043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.504187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.504217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.504505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.504536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.150 qpair failed and we were unable to recover it. 00:27:58.150 [2024-07-26 11:35:53.504809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.150 [2024-07-26 11:35:53.504842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.504981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.505011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 
00:27:58.151 [2024-07-26 11:35:53.505249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.505280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.505462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.505493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.505803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.505834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.505980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.506011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.506203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.506234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 
00:27:58.151 [2024-07-26 11:35:53.506504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.506544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.506740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.506773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.506950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.506980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.507126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.507158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.507445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.507476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 
00:27:58.151 [2024-07-26 11:35:53.507670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.507702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.507864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.507894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.508074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.508105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.508319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.508350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 00:27:58.151 [2024-07-26 11:35:53.508539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.151 [2024-07-26 11:35:53.508570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.151 qpair failed and we were unable to recover it. 
00:27:58.151 [2024-07-26 11:35:53.508785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.508817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.508950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.508981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.509261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.509293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.509546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.509578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.509812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.509851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.510064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.510096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.510347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.510378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.510594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.510624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.510909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.510942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.511183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.511215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.511420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.511452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.511707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.511740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.512016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.512048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.512255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.512287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.512420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.512451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.151 qpair failed and we were unable to recover it.
00:27:58.151 [2024-07-26 11:35:53.512588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.151 [2024-07-26 11:35:53.512619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.512828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.512861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.513066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.513098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.513288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.513320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.513513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.513545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.513725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.513758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.513944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.513976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.514228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.514259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.514445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.514476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.514667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.514699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.514914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.514945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.515075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.515106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.515242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.515272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.515534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.515566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.515761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.515807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.515967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.515999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.516154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.516185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.516393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.516424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.516602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.516645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.516812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.516845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.517116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.517161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.517380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.517421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.517700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.517734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.517870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.517903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.518146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.518177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.518443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.518475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.518771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.518803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.518995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.519026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.519160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.519191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.519418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.519456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.519724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.519757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.519896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.519927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.520049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.520081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.520227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.520258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.520368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.152 [2024-07-26 11:35:53.520398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.152 qpair failed and we were unable to recover it.
00:27:58.152 [2024-07-26 11:35:53.520683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.520715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.520932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.520964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.521114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.521147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.521343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.521374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.521647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.521680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.521869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.521902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.522163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.522195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.522426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.522457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.522678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.522711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.522978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.523010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.523149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.523180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.523451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.523482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.523677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.523709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.523953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.523984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.524224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.524255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.524446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.524477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.524725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.524757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.524962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.524994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.525140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.525171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.525460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.525491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.525699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.525732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.525925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.525957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.526091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.526122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.526337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.526369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.526499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.526530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.526738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.526771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.526968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.526999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.527124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.527156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.527372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.527403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.527606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.527646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.527846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.527877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.153 qpair failed and we were unable to recover it.
00:27:58.153 [2024-07-26 11:35:53.528086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.153 [2024-07-26 11:35:53.528118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.528242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.528273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.528559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.528590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.528733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.528770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.528959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.528990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.529212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.529244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.529419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.529449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.529719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.529752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.530014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.530045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.530196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.530227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.530494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.530525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.530771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.530804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.530953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.530984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.531158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.154 [2024-07-26 11:35:53.531190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.154 qpair failed and we were unable to recover it.
00:27:58.154 [2024-07-26 11:35:53.531372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.531404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.531670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.531701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.531837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.531868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.532018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.532049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.532181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.532213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 
00:27:58.154 [2024-07-26 11:35:53.532470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.532502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.532783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.532816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.533098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.533130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.533265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.533296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.533497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.533528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 
00:27:58.154 [2024-07-26 11:35:53.533718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.533750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.533942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.533973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.534157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.534188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.534414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.534445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.534635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.534668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 
00:27:58.154 [2024-07-26 11:35:53.534861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.534892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.535060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.535115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.535349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.535394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.535647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.535680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.535884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.535915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 
00:27:58.154 [2024-07-26 11:35:53.536063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.536094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.536341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.536372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.536566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.536597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.154 [2024-07-26 11:35:53.536788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.154 [2024-07-26 11:35:53.536823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.154 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.537068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.537100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 
00:27:58.155 [2024-07-26 11:35:53.537239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.537270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.537542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.537573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.537810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.537842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.537972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.538003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.538128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.538167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 
00:27:58.155 [2024-07-26 11:35:53.538470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.538502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.538704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.538736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.538870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.538900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.539043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.539074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.539212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.539242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 
00:27:58.155 [2024-07-26 11:35:53.539435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.539466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.539770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.539803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.540000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.540031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.540226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.540257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.540389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.540420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 
00:27:58.155 [2024-07-26 11:35:53.540552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.540583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.540801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.540834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.540962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.540993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.541215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.541246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.541368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.541399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 
00:27:58.155 [2024-07-26 11:35:53.541672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.541704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.541904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.541935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.542064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.542094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.542207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.542239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.542412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.542443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 
00:27:58.155 [2024-07-26 11:35:53.542644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.542676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.542871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.542902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.155 [2024-07-26 11:35:53.543137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.155 [2024-07-26 11:35:53.543168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.155 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.543320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.543352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.543462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.543491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 
00:27:58.156 [2024-07-26 11:35:53.543733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.543766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.544015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.544049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.544338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.544369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.544573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.544604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.544806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.544837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 
00:27:58.156 [2024-07-26 11:35:53.544985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.545016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.545156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.545186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.545299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.545330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.545508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.545539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.545667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.545699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 
00:27:58.156 [2024-07-26 11:35:53.545825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.545857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.546053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.546084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.546289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.546320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.546563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.546595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.546758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.546796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 
00:27:58.156 [2024-07-26 11:35:53.546930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.546962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.547148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.547179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.547297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.547327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.547533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.547564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.547787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.547818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 
00:27:58.156 [2024-07-26 11:35:53.548015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.548046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.548186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.548218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.548440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.548471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.548711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.548743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.548882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.548913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 
00:27:58.156 [2024-07-26 11:35:53.549123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.549154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.549359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.549389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.549531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.549562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.156 qpair failed and we were unable to recover it. 00:27:58.156 [2024-07-26 11:35:53.549749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.156 [2024-07-26 11:35:53.549782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.157 qpair failed and we were unable to recover it. 00:27:58.157 [2024-07-26 11:35:53.550011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.157 [2024-07-26 11:35:53.550042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.157 qpair failed and we were unable to recover it. 
00:27:58.157 [2024-07-26 11:35:53.550178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.157 [2024-07-26 11:35:53.550209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.157 qpair failed and we were unable to recover it.
[the posix_sock_create connect() / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." record above repeats continuously from 11:35:53.550 through 11:35:53.577 for tqpair=0x7f41f8000b90, 0x18b8f30, and 0x7f41f0000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 (ECONNREFUSED)]
00:27:58.160 [2024-07-26 11:35:53.577710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.577744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.577953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.577985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.578241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.578273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.578515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.578547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.578827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.578859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 
00:27:58.160 [2024-07-26 11:35:53.579101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.579132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.579323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.579355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.579563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.579594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.579727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.579759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.579876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.579906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 
00:27:58.160 [2024-07-26 11:35:53.580043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.580074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.580304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.580335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.580536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.580567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.580766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.580798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 00:27:58.160 [2024-07-26 11:35:53.580950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.160 [2024-07-26 11:35:53.580982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.160 qpair failed and we were unable to recover it. 
00:27:58.160 [2024-07-26 11:35:53.581158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.581191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.581455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.581486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.581750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.581789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.582030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.582061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.582319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.582351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 
00:27:58.161 [2024-07-26 11:35:53.582649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.582682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.582891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.582923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.583064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.583096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.583345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.583378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.583605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.583646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 
00:27:58.161 [2024-07-26 11:35:53.583758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.583790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.583984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.584015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.584146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.584177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.584365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.584396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 00:27:58.161 [2024-07-26 11:35:53.584587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.161 [2024-07-26 11:35:53.584618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.161 qpair failed and we were unable to recover it. 
00:27:58.161 [2024-07-26 11:35:53.584999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.161 [2024-07-26 11:35:53.585034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.161 qpair failed and we were unable to recover it.
00:27:58.162 [2024-07-26 11:35:53.593861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.162 [2024-07-26 11:35:53.593899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.162 qpair failed and we were unable to recover it.
00:27:58.163 [2024-07-26 11:35:53.599207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.599238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.599430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.599462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.599687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.599718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.599865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.599896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.600039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.600070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 
00:27:58.163 [2024-07-26 11:35:53.600320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.600351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.600617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.600661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.600772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.600803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.600934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.600965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.601090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.601119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 
00:27:58.163 [2024-07-26 11:35:53.601399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.601430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.601675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.601707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.601894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.601925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.602117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.602148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.602289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.602320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 
00:27:58.163 [2024-07-26 11:35:53.602515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.602546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.602782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.602823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.603018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.603050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.603170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.603202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.603325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.603355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 
00:27:58.163 [2024-07-26 11:35:53.603594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.603625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.603902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.603934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.604066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.604097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.604322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.163 [2024-07-26 11:35:53.604353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.163 qpair failed and we were unable to recover it. 00:27:58.163 [2024-07-26 11:35:53.604592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.604624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.604784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.604815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.605104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.605135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.605347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.605378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.605622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.605662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.605846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.605884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.606050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.606082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.606356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.606387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.606577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.606607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.606752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.606784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.606930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.606961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.607151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.607182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.607420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.607451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.607655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.607688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.607874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.607907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.608052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.608083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.608289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.608320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.608491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.608522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.608745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.608777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.608965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.608996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.609265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.609296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.609482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.609514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.609722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.609754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.609890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.609922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.610114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.610145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.610331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.610362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.610567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.610598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.610917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.610953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.611175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.611207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.611328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.611360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.611605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.611649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.611850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.611881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.612023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.612061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.612273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.612305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.612411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.612440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.612622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.612667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 
00:27:58.164 [2024-07-26 11:35:53.612826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.612859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.613048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.164 [2024-07-26 11:35:53.613079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.164 qpair failed and we were unable to recover it. 00:27:58.164 [2024-07-26 11:35:53.613212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.613244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.613531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.613563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.613713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.613746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 
00:27:58.165 [2024-07-26 11:35:53.613885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.613917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.614181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.614215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.614477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.614508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.614697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.614729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.614926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.614957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 
00:27:58.165 [2024-07-26 11:35:53.615150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.615182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.615469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.615500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.615674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.615707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.615951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.615984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.616126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.616159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 
00:27:58.165 [2024-07-26 11:35:53.616360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.616392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.616654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.616686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.616811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.616843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.616998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.617029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 00:27:58.165 [2024-07-26 11:35:53.617306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.165 [2024-07-26 11:35:53.617338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.165 qpair failed and we were unable to recover it. 
[… 70 further identical connect() failed (errno = 111) / sock connection error records for tqpair=0x18b8f30 (addr=10.0.0.2, port=4420), timestamps 11:35:53.617609 through 11:35:53.633162, each ending "qpair failed and we were unable to recover it.", omitted …]
00:27:58.167 [2024-07-26 11:35:53.633462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.167 [2024-07-26 11:35:53.633493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.167 qpair failed and we were unable to recover it.
00:27:58.167 [2024-07-26 11:35:53.633622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.167 [2024-07-26 11:35:53.633673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.167 qpair failed and we were unable to recover it.
00:27:58.167 [2024-07-26 11:35:53.633813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.167 [2024-07-26 11:35:53.633846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.167 qpair failed and we were unable to recover it.
00:27:58.167 [2024-07-26 11:35:53.634035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.167 [2024-07-26 11:35:53.634067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.167 qpair failed and we were unable to recover it.
00:27:58.167 [2024-07-26 11:35:53.634314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.167 [2024-07-26 11:35:53.634344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.167 qpair failed and we were unable to recover it.
[… 35 further identical connect() failed (errno = 111) / sock connection error records for tqpair=0x7f41f0000b90 (addr=10.0.0.2, port=4420), timestamps 11:35:53.634614 through 11:35:53.642598, each ending "qpair failed and we were unable to recover it.", omitted …]
00:27:58.168 [2024-07-26 11:35:53.642825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.642861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.643086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.643122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.643395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.643425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.643684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.643716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.643856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.643888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 
00:27:58.168 [2024-07-26 11:35:53.644093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.644124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.644252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.644283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.644427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.644457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.644720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.644751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.644953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.644985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 
00:27:58.168 [2024-07-26 11:35:53.645170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.645201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.645394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.645426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.645555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.645586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.645862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.645894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.646097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.646133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 
00:27:58.168 [2024-07-26 11:35:53.646381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.646412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.168 [2024-07-26 11:35:53.646674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.168 [2024-07-26 11:35:53.646706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.168 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.646949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.646980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.647120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.647150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.647373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.647405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 
00:27:58.169 [2024-07-26 11:35:53.647682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.647715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.647902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.647933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.648149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.648179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.648447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.648478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.648661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.648692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 
00:27:58.169 [2024-07-26 11:35:53.648867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.648897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.649017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.649048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.649170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.649201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.649466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.649497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.649614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.649652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 
00:27:58.169 [2024-07-26 11:35:53.649870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.649900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.650143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.650174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.650362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.650393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.650568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.650599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.650881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.650916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 
00:27:58.169 [2024-07-26 11:35:53.651221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.651254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.651444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.651474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.651761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.651794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.651935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.651966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.652209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.652240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 
00:27:58.169 [2024-07-26 11:35:53.652414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.652446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.652688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.652725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.652968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.653000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.653121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.653152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.653366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.653396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 
00:27:58.169 [2024-07-26 11:35:53.653644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.653675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.653891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.653922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.654124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.654155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.654284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.654315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.654436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.654467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 
00:27:58.169 [2024-07-26 11:35:53.654655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.654688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.654929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.654961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.655145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.655176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.655368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.655399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.169 qpair failed and we were unable to recover it. 00:27:58.169 [2024-07-26 11:35:53.655664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.169 [2024-07-26 11:35:53.655697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 
00:27:58.170 [2024-07-26 11:35:53.655922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.655953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.656192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.656224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.656416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.656447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.656576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.656607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.656743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.656774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 
00:27:58.170 [2024-07-26 11:35:53.656898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.656929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.657060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.657091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.657211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.657242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.657369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.657401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.657525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.657557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 
00:27:58.170 [2024-07-26 11:35:53.657751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.657783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.657922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.657953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.658191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.658222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.658354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.658390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.658675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.658707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 
00:27:58.170 [2024-07-26 11:35:53.658902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.658933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.659109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.659140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.659336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.659368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.659612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.659654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.659773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.659804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 
00:27:58.170 [2024-07-26 11:35:53.659928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.659959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.660087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.660117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.660291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.660322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.660448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.660479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.660719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.660752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 
00:27:58.170 [2024-07-26 11:35:53.660944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.660976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.661155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.661186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.661390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.661422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.661613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.661654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.661833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.661865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 
00:27:58.170 [2024-07-26 11:35:53.662038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.662069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.662245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.170 [2024-07-26 11:35:53.662276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.170 qpair failed and we were unable to recover it. 00:27:58.170 [2024-07-26 11:35:53.662530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.662562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.662736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.662769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.662941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.662972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.171 [2024-07-26 11:35:53.663188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.663219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.663413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.663444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.663571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.663603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.663808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.663840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.663956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.663987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.171 [2024-07-26 11:35:53.664196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.664228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.664413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.664444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.664557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.664588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.664864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.664897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.665070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.665102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.171 [2024-07-26 11:35:53.665368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.665399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.665661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.665694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.665967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.665999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.666180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.666211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.666487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.666518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.171 [2024-07-26 11:35:53.666709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.666743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.666863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.666894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.667073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.667104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.667392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.667423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.667644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.667680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.171 [2024-07-26 11:35:53.667955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.667986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.668210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.668240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.668480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.668510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.668656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.668687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.668930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.668961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.171 [2024-07-26 11:35:53.669135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.669166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.669342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.669373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.669494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.669525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.669709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.669740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.669942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.669973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.171 [2024-07-26 11:35:53.670097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.670128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.670397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.670428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.670647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.670684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.670875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.670907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 00:27:58.171 [2024-07-26 11:35:53.671125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.171 [2024-07-26 11:35:53.671156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.171 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.671369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.671400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.671537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.671569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.671793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.671824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.672001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.672033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.672208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.672239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.672377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.672407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.672624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.672665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.672787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.672817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.673084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.673115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.673328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.673359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.673544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.673575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.673852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.673884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.674076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.674107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.674282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.674313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.674436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.674467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.674596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.674639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.674767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.674797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.674975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.675005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.675245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.675276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.675459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.675490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.675664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.675696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.675946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.675977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.676188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.676219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.676362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.676393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.676524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.676562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.676822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.676854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.676981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.677012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.677275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.677307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.677487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.677518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.677647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.677679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.677870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.677901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.678025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.678056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.678177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.678207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.678401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.678432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.678545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.678576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 
00:27:58.172 [2024-07-26 11:35:53.678691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.678724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.679011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.172 [2024-07-26 11:35:53.679042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.172 qpair failed and we were unable to recover it. 00:27:58.172 [2024-07-26 11:35:53.679276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.679314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.679488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.679519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.679716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.679748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 
00:27:58.173 [2024-07-26 11:35:53.679929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.679960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.680176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.680207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.680379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.680410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.680638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.680670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.680859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.680890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 
00:27:58.173 [2024-07-26 11:35:53.681035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.681066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.681277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.681308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.681572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.681602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.681755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.681788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.681981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.682012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 
00:27:58.173 [2024-07-26 11:35:53.682143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.682174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.682374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.682405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.682599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.682641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.682883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.682914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 00:27:58.173 [2024-07-26 11:35:53.683180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.173 [2024-07-26 11:35:53.683211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.173 qpair failed and we were unable to recover it. 
00:27:58.173 [2024-07-26 11:35:53.683388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.683420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.683661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.683693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.683885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.683916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.684103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.684133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.684346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.684377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.684500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.684531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.684726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.684757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.685023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.685054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.685169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.685200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.685354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.685391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.685665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.685702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.685837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.685869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.686084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.686116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.686354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.686385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.686641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.686674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.686862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.686893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.687027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.687059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.687269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.687300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.687412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.687443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.687659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.173 [2024-07-26 11:35:53.687691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.173 qpair failed and we were unable to recover it.
00:27:58.173 [2024-07-26 11:35:53.687888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.687920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.688136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.688167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.688341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.688372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.688578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.688610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.688818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.688850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.689063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.689094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.689354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.689385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.689514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.689544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.689734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.689766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.689972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.690003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.690295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.690326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.690469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.690501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.690718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.690750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.690969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.691000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.691191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.691222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.691393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.691424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.691610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.691660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.691851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.691883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.692154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.692185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.692382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.692413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.692602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.692641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.692905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.692937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.693126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.693157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.693369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.693399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.693650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.693682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.693799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.693830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.694068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.694099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.694339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.694370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.694495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.694526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.694642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.694675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.694788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.694820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.695028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.695059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.695316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.695347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.695472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.695504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.695702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.695734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.695920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.695952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.696127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.696158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.174 [2024-07-26 11:35:53.696348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.174 [2024-07-26 11:35:53.696379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.174 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.696650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.696681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.696902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.696933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.697110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.697141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.697326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.697357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.697484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.697514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.697647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.697686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.697887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.697919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.698182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.698213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.698402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.698433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.698608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.698658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.698927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.698959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.699148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.699179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.699380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.699412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.699545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.699576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.699776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.699809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.699938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.699970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.700172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.700203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.700416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.700447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.700624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.700664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.700812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.700843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.700975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.701006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.701183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.701214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.701406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.701437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.701571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.701601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.701731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.701770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.701891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.701922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.702044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.702076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.702255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.702287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.702478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.702510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.702639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.702672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.702943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.702975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.703161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.175 [2024-07-26 11:35:53.703192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.175 qpair failed and we were unable to recover it.
00:27:58.175 [2024-07-26 11:35:53.703374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.176 [2024-07-26 11:35:53.703411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.176 qpair failed and we were unable to recover it.
00:27:58.176 [2024-07-26 11:35:53.703604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.176 [2024-07-26 11:35:53.703647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.176 qpair failed and we were unable to recover it.
00:27:58.176 [2024-07-26 11:35:53.703826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.176 [2024-07-26 11:35:53.703856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.176 qpair failed and we were unable to recover it.
00:27:58.176 [2024-07-26 11:35:53.704068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.176 [2024-07-26 11:35:53.704099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.176 qpair failed and we were unable to recover it.
00:27:58.176 [2024-07-26 11:35:53.704242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.176 [2024-07-26 11:35:53.704274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.176 qpair failed and we were unable to recover it.
00:27:58.176 [2024-07-26 11:35:53.704448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.704478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.704675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.704708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.704902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.704932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.705109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.705140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.705317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.705348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-26 11:35:53.705535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.705565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.705811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.705843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.706029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.706061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.706258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.706289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.706476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.706508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-26 11:35:53.706781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.706816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.706940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.706972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.707169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.707200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.707373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.707403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.707577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.707607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-26 11:35:53.707815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.707847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.708118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.708150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.708322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.708353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.708525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.708556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.708690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.708722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-26 11:35:53.708914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.708945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.709190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.709221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.709447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.709484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.709638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.709670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.709786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.709817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-26 11:35:53.710041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.710071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.710267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.710298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.710424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.710454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.710570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.710601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.710750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.710785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-26 11:35:53.711062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.711093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.711286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.711316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-26 11:35:53.711434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.176 [2024-07-26 11:35:53.711465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.711656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.711688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.711869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.711900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-26 11:35:53.712080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.712117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.712302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.712334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.712598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.712638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.712865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.712896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.713089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.713121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-26 11:35:53.713328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.713359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.713480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.713510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.713644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.713676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.713858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.713889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.714008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.714039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-26 11:35:53.714236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.714267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.714434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.714465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.714646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.714678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.714940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.714971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.715090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.715123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-26 11:35:53.715399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.715430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.715624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.715671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.715867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.715898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.716084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.716115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.716298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.716329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-26 11:35:53.716448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.716479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.716748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.716780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.716923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.716954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.717128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.717160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.717358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.717390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-26 11:35:53.717663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.717694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.717870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.717901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.718171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.718207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.718399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.718430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.718663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.718697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-26 11:35:53.718891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.718923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.719116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.719147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.719399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.719429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.719672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.719704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-26 11:35:53.719907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.177 [2024-07-26 11:35:53.719937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.720133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.720164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.720285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.720316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.720508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.720538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.720722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.720754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.720930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.720961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.721228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.721259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.721451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.721483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.721685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.721716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.721908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.721939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.722078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.722109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.722237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.722268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.722534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.722564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.722811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.722844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.723052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.723083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.723272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.723303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.723424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.723455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.723576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.723607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.723737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.723769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.724016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.724047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.724273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.724309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.724504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.724536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.724653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.724685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.724868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.724900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.725143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.725175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.725417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.725449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.725622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.725663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.725862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.725894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.726160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.726191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.726300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.726332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.726437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.726469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.726662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.726694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.726831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.726862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.727099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.727137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.727261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.727292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.727421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.727453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 
00:27:58.178 [2024-07-26 11:35:53.727635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.727668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.727907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.727939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.178 qpair failed and we were unable to recover it. 00:27:58.178 [2024-07-26 11:35:53.728049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.178 [2024-07-26 11:35:53.728080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.728215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.728245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.728364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.728396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.728536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.728568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.728748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.728780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.729051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.729083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.729273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.729304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.729486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.729517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.729658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.729691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.729881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.729914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.730030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.730061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.730315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.730348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.730457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.730489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.730733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.730766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.730948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.730981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.731181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.731212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.731472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.731504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.731691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.731723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.731979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.732011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.732168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.732201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.732463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.732495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.732638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.732670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.732911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.732948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.733225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.733256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.733507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.733538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.733729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.733761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.733887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.733917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.734047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.734078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.734341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.734372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.734547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.734579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.734840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.734872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.735139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.735170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.735337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.735368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.735641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.735673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.735809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.735840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.736033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.736069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.736277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.736308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 00:27:58.179 [2024-07-26 11:35:53.736496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.179 [2024-07-26 11:35:53.736528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.179 qpair failed and we were unable to recover it. 
00:27:58.179 [2024-07-26 11:35:53.736663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.736694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.736879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.736910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.737034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.737065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.737245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.737276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.737546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.737576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 
00:27:58.180 [2024-07-26 11:35:53.737781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.737813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.737950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.737981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.738193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.738224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.738411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.738442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.738620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.738662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 
00:27:58.180 [2024-07-26 11:35:53.738790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.738820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.739022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.739053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.739229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.739260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.739523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.739555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.739744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.739776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 
00:27:58.180 [2024-07-26 11:35:53.739951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.739982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.740267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.740299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.740420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.740450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.740592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.740623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.740815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.740846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 
00:27:58.180 [2024-07-26 11:35:53.741034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.741065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.741198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.741228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.741411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.741441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.741651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.741683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.741802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.741833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 
00:27:58.180 [2024-07-26 11:35:53.741945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.741976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.742070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.742100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.742276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.742308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.742488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.742518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.742642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.742674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 
00:27:58.180 [2024-07-26 11:35:53.742782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.742813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.742946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.180 [2024-07-26 11:35:53.742978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.180 qpair failed and we were unable to recover it. 00:27:58.180 [2024-07-26 11:35:53.743164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.743195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 00:27:58.181 [2024-07-26 11:35:53.743400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.743430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 00:27:58.181 [2024-07-26 11:35:53.743545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.743576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 
00:27:58.181 [2024-07-26 11:35:53.743792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.743825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 00:27:58.181 [2024-07-26 11:35:53.744013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.744044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 00:27:58.181 [2024-07-26 11:35:53.744242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.744278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 00:27:58.181 [2024-07-26 11:35:53.744409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.744440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 00:27:58.181 [2024-07-26 11:35:53.744613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.744653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 
00:27:58.181 [2024-07-26 11:35:53.749952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.181 [2024-07-26 11:35:53.749987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.181 qpair failed and we were unable to recover it. 
00:27:58.184 [2024-07-26 11:35:53.768800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.768833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.769104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.769135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.769352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.769383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.769584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.769614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.769756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.769788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 
00:27:58.184 [2024-07-26 11:35:53.770057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.770088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.770288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.770319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.770451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.770482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.770735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.770768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.770956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.770988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 
00:27:58.184 [2024-07-26 11:35:53.771165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.771196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.771391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.771423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.771701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.771733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.771917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.771950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.772151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.772197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 
00:27:58.184 [2024-07-26 11:35:53.772408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.772446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.772691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.772724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.772904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.772935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.773130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.773162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.773424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.773455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 
00:27:58.184 [2024-07-26 11:35:53.773588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.773620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.773935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.773982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.774196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.774231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.774476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.774506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.774786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.774817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 
00:27:58.184 [2024-07-26 11:35:53.774955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.774986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.775251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.775281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.775488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.775519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.184 [2024-07-26 11:35:53.775788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.184 [2024-07-26 11:35:53.775820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.184 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.776065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.776095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 
00:27:58.470 [2024-07-26 11:35:53.776287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.776319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.776453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.776484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.776696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.776727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.776905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.776936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.777110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.777151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 
00:27:58.470 [2024-07-26 11:35:53.777342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.777373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.777614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.777657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.777898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.777928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.778168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.778199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.778396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.778426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 
00:27:58.470 [2024-07-26 11:35:53.778612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.778655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.778866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.778896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.779167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.779198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.779408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.779438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.779649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.779681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 
00:27:58.470 [2024-07-26 11:35:53.779893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.779924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.780040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.780071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.780187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.780218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.780468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.470 [2024-07-26 11:35:53.780499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.470 qpair failed and we were unable to recover it. 00:27:58.470 [2024-07-26 11:35:53.780744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.780776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 
00:27:58.471 [2024-07-26 11:35:53.780964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.780995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.781234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.781264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.781532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.781563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.781682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.781714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.781927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.781958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 
00:27:58.471 [2024-07-26 11:35:53.782142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.782173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.782384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.782415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.782682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.782713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.782961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.782992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.783201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.783231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 
00:27:58.471 [2024-07-26 11:35:53.783419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.783449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.783642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.783683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.783984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.784015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.784210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.784241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.784362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.784393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 
00:27:58.471 [2024-07-26 11:35:53.784655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.784687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.784893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.784924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.785039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.785068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.785256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.785287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.785494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.785524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 
00:27:58.471 [2024-07-26 11:35:53.785714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.785746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.786021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.786051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.786291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.786321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.786442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.786473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 00:27:58.471 [2024-07-26 11:35:53.786743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.471 [2024-07-26 11:35:53.786781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.471 qpair failed and we were unable to recover it. 
00:27:58.471 [2024-07-26 11:35:53.786978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.787009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.787274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.787304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.787550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.787581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.787699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.787732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.787988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.788019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.788208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.788240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.788372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.788403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.788528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.788558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.788802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.788834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.789026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.789057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.789235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.789265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.471 qpair failed and we were unable to recover it.
00:27:58.471 [2024-07-26 11:35:53.789466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.471 [2024-07-26 11:35:53.789496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.789670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.789700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.789830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.789862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.790036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.790067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.790203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.790234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.790361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.790391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.790653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.790685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.790870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.790900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.791088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.791118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.791316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.791346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.791462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.791492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.791690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.791722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.791902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.791933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.792137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.792168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.792383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.792414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.792592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.792622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.792809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.792841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.793086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.793116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.793306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.793336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.793518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.793548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.793663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.793694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.793871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.793902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.794045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.794075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.794208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.794238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.794426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.794459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.794637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.794667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.794880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.794910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.795177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.795208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.795329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.795365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.795500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.795531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.795635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.795665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.795784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.795815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.795947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.795978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.796162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.796193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.796382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.796411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.796615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.796652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.796913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.796943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.797129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.472 [2024-07-26 11:35:53.797158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.472 qpair failed and we were unable to recover it.
00:27:58.472 [2024-07-26 11:35:53.797330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.797360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.797527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.797557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.797741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.797773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.797948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.797977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.798110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.798140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.798362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.798394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.798610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.798662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.798801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.798832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.799019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.799050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.799186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.799216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.799349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.799378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.799522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.799552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.799727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.799758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.799880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.799910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.800030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.800062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.800229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.800259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.800509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.800538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.800851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.800881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.801073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.801105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.801282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.801312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.801434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.801465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.801660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.801693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.801878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.801909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.802036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.802066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.802265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.802295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.802487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.802519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.802651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.802682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.802804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.802834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.803009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.803039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.803245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.803276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.803388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.803424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.803615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.803655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.803926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.803957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.804274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.804304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.804490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.804521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.804706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.804738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.804924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.804956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.473 qpair failed and we were unable to recover it.
00:27:58.473 [2024-07-26 11:35:53.805160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.473 [2024-07-26 11:35:53.805191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.805364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.805395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.805538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.805569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.805824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.805856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.806039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.806069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.806273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.806303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.806430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.806462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.806647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.806679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.806805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.806836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.807022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.807052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.807294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.807326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.807510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.807541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.807729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.807760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.807882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.807912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.808129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.808159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.808291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.808321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.808524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.808555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.808746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.808777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.808897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.808927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.809121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.809151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.809362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.809392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.809516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.809546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.809728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.809759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.810021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.810052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.810322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.810353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.810530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.810561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.810743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.810776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.811069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.811100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.811271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.474 [2024-07-26 11:35:53.811302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.474 qpair failed and we were unable to recover it.
00:27:58.474 [2024-07-26 11:35:53.811500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.811529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-07-26 11:35:53.811703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.811735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-07-26 11:35:53.811993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.812023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-07-26 11:35:53.812198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.812228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-07-26 11:35:53.812469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.812505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 
00:27:58.474 [2024-07-26 11:35:53.812707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.812739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-07-26 11:35:53.812907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.812938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-07-26 11:35:53.813126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.474 [2024-07-26 11:35:53.813156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.813346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.813377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.813564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.813593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-07-26 11:35:53.813875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.813907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.814149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.814180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.814370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.814401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.814598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.814650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.814846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.814877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-07-26 11:35:53.815043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.815072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.815254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.815285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.815405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.815436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.815624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.815667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.815812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.815842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-07-26 11:35:53.816038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.816069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.816195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.816227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.816355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.816384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.816509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.816540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.816667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.816698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-07-26 11:35:53.816939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.816969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.817166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.817196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.817369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.817399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.817575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.817606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.817813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.817844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-07-26 11:35:53.817969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.818000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.818187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.818217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.818427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.818456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.818725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.818756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.818875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.818905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-07-26 11:35:53.819078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.819110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.819232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.819262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.819517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.819549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.819672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.475 [2024-07-26 11:35:53.819704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-07-26 11:35:53.819889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.819920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-07-26 11:35:53.820065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.820095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.820211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.820243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.820437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.820467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.820645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.820676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.820869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.820909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-07-26 11:35:53.821179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.821210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.821400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.821430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.821608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.821650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.821762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.821792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.822064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.822095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-07-26 11:35:53.822279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.822310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.822485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.822516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.822701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.822732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.822851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.822882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.823076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.823107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-07-26 11:35:53.823371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.823402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.823693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.823725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.823871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.823902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.824096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.824127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.824243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.824273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-07-26 11:35:53.824517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.824547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.824676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.824706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.824890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.824921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.825160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.825191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.825311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.825341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-07-26 11:35:53.825582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.825614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.825762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.825791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.825925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.825957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.826222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.826253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.476 [2024-07-26 11:35:53.826466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.476 [2024-07-26 11:35:53.826497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-07-26 11:35:53.826682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.476 [2024-07-26 11:35:53.826714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.476 qpair failed and we were unable to recover it.
00:27:58.476 [2024-07-26 11:35:53.826894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.476 [2024-07-26 11:35:53.826955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.476 qpair failed and we were unable to recover it.
00:27:58.476 [2024-07-26 11:35:53.827190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.476 [2024-07-26 11:35:53.827239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.476 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0x7f41f0000b90 repeated 22 more times, 11:35:53.827434 through 11:35:53.831938 ...]
00:27:58.477 [2024-07-26 11:35:53.832197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.832228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.832362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.832394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.832563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.832593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.832744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.832777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.832950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.832982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 
00:27:58.477 [2024-07-26 11:35:53.833168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.833200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.833401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.833433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.833703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.833734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.833909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.833940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.834077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.834109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 
00:27:58.477 [2024-07-26 11:35:53.834350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.834381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.834656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.834689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.834821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.834852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.835037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.835069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.835193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.835224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 
00:27:58.477 [2024-07-26 11:35:53.835399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.835430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.835711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.835753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.835939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.835972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.477 [2024-07-26 11:35:53.836161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.477 [2024-07-26 11:35:53.836192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.477 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.836317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.836348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.836531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.836561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.836854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.836888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.837065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.837097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.837291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.837322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.837582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.837613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.837764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.837796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.838000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.838031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.838224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.838255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.838447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.838478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.838672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.838705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.838894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.838926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.839060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.839091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.839224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.839256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.839449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.839481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.839690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.839722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.839916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.839947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.840088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.840120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.840307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.840338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.840483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.840514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.840758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.840790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.841058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.841089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.841316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.841347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.841525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.841555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.841810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.841847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.841992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.842024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.842223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.842254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.842467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.842499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.842763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.842796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.843010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.843041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.843217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.843249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.843423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.843454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.843654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.843686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.843924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.843956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.844141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.844172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.844365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.844396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 
00:27:58.478 [2024-07-26 11:35:53.844567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.844597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.478 [2024-07-26 11:35:53.844824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.478 [2024-07-26 11:35:53.844862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.478 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.845004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.845035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.845227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.845257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.845438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.845470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 
00:27:58.479 [2024-07-26 11:35:53.845596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.845637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.845898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.845930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.846109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.846140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.846260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.846291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.846534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.846565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 
00:27:58.479 [2024-07-26 11:35:53.846693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.846725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.846831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.846862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.847077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.847109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.847284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.847315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.847514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.847545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 
00:27:58.479 [2024-07-26 11:35:53.847827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.847876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.848017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.848063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.848315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.848346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.848523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.848554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 00:27:58.479 [2024-07-26 11:35:53.848689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.479 [2024-07-26 11:35:53.848722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.479 qpair failed and we were unable to recover it. 
00:27:58.479 [2024-07-26 11:35:53.848978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.479 [2024-07-26 11:35:53.849009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.479 qpair failed and we were unable to recover it.
00:27:58.479 [the preceding three-line error repeats for tqpair=0x18b8f30: 27 further occurrences, last at 2024-07-26 11:35:53.854568]
00:27:58.480 [2024-07-26 11:35:53.854729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.480 [2024-07-26 11:35:53.854763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.480 qpair failed and we were unable to recover it.
00:27:58.480 [the preceding three-line error repeats for tqpair=0x7f4200000b90: 60 further occurrences, last at 2024-07-26 11:35:53.868730]
00:27:58.481 [2024-07-26 11:35:53.868915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.481 [2024-07-26 11:35:53.868950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.481 qpair failed and we were unable to recover it.
00:27:58.482 [the preceding three-line error repeats for tqpair=0x7f41f0000b90: 25 further occurrences, last at 2024-07-26 11:35:53.874463]
00:27:58.482 [2024-07-26 11:35:53.874703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.874736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.874914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.874945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.875200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.875231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.875420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.875452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.875579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.875611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 
00:27:58.482 [2024-07-26 11:35:53.875760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.875792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.876036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.876067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.876309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.876340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.876582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.876613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.876767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.876798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 
00:27:58.482 [2024-07-26 11:35:53.877046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.877078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.877201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.482 [2024-07-26 11:35:53.877234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.482 qpair failed and we were unable to recover it. 00:27:58.482 [2024-07-26 11:35:53.877374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.877405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.877594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.877625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.877860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.877897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.483 [2024-07-26 11:35:53.878039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.878071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.878190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.878220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.878414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.878443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.878582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.878612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.878799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.878830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.483 [2024-07-26 11:35:53.879020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.879051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.879156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.879185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.879438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.879469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.879656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.879688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.879861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.879891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.483 [2024-07-26 11:35:53.880098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.880129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.880373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.880404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.880597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.880641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.880830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.880861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.881049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.881079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.483 [2024-07-26 11:35:53.881253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.881283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.881415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.881446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.881649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.881681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.881860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.881891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.882169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.882200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.483 [2024-07-26 11:35:53.882375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.882406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.882611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.882651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.882920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.882951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.883089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.883119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.883303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.883332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.483 [2024-07-26 11:35:53.883524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.883555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.883835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.883869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.884041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.884071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.884262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.884292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.884407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.884437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.483 [2024-07-26 11:35:53.884682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.884712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.884906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.884936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.885073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.885103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.885211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.885240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 00:27:58.483 [2024-07-26 11:35:53.885460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.483 [2024-07-26 11:35:53.885491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.483 qpair failed and we were unable to recover it. 
00:27:58.484 [2024-07-26 11:35:53.885678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.885709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.885853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.885883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.886126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.886158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.886402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.886433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.886640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.886682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 
00:27:58.484 [2024-07-26 11:35:53.886862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.886894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.887016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.887047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.887163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.887194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.887403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.887434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.887624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.887667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 
00:27:58.484 [2024-07-26 11:35:53.887852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.887884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.888125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.888156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.888290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.888321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.888438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.888470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.888672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.888704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 
00:27:58.484 [2024-07-26 11:35:53.888894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.888926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.889166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.889197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.889461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.889492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.889695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.889727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.889870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.889901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 
00:27:58.484 [2024-07-26 11:35:53.890089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.890120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.890252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.890284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.890458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.890490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.890678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.890711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.890822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.890853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 
00:27:58.484 [2024-07-26 11:35:53.890995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.891026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.891266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.891297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.891409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.891440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.891644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.891676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.891851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.891883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 
00:27:58.484 [2024-07-26 11:35:53.892145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.892176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.892292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.484 [2024-07-26 11:35:53.892328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.484 qpair failed and we were unable to recover it. 00:27:58.484 [2024-07-26 11:35:53.892645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.892679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.892807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.892839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.892973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.893005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.485 [2024-07-26 11:35:53.893201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.893233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.893482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.893513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.893699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.893731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.893912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.893944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.894130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.894162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.485 [2024-07-26 11:35:53.894334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.894365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.894508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.894539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.894719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.894752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.894933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.894964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.895233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.895264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.485 [2024-07-26 11:35:53.895458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.895490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.895757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.895789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.896070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.896101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.896347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.896378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.896512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.896543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.485 [2024-07-26 11:35:53.896788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.896821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.897062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.897092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.897222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.897254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.897514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.897546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.897824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.897857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.485 [2024-07-26 11:35:53.898040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.898071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.898321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.898352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.898532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.898563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.898771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.898808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.899002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.899034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.485 [2024-07-26 11:35:53.899152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.899183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.899296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.899327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.899502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.899533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.899722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.899754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.899936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.899968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.485 [2024-07-26 11:35:53.900258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.900289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.900479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.900510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.900748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.900780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.900907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.900938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 00:27:58.485 [2024-07-26 11:35:53.901183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.485 [2024-07-26 11:35:53.901214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.485 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.901394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.901425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.901609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.901651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.901893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.901924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.902102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.902133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.902335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.902367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.902662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.902695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.902905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.902936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.903139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.903170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.903378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.903409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.903615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.903656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.903835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.903866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.904135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.904167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.904359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.904391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.904587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.904618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.904748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.904780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.905069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.905106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.905226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.905257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.905442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.905473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.905654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.905686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.905878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.905910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.906107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.906138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.906316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.906347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.906476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.906508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.906657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.906690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.906939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.906972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.907238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.907269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.907463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.907495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.907698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.907730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.907937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.907968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.908079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.908110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.908243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.908274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.908534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.908565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.908768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.908799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.908982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.909013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.909218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.909249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 
00:27:58.486 [2024-07-26 11:35:53.909373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.909404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.909618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.909660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.486 [2024-07-26 11:35:53.909834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.486 [2024-07-26 11:35:53.909865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.486 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.910038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.910070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.910195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.910226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 
00:27:58.487 [2024-07-26 11:35:53.910340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.910372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.910502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.910533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.910777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.910809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.910989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.911020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.911209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.911240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 
00:27:58.487 [2024-07-26 11:35:53.911363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.911394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.911590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.911621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.911886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.911918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.912109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.912141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.912340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.912372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 
00:27:58.487 [2024-07-26 11:35:53.912561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.912591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.912849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.912880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.913132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.913163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.913409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.913440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.913617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.913666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 
00:27:58.487 [2024-07-26 11:35:53.913795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.913825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.914104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.914142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.914268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.914300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.914422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.914454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 00:27:58.487 [2024-07-26 11:35:53.914746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.487 [2024-07-26 11:35:53.914779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.487 qpair failed and we were unable to recover it. 
00:27:58.487 [2024-07-26 11:35:53.915011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.915042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.915191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.915223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.915399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.915431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.915567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.915599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.915793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.915830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.916010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.916041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.916147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.916178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.916360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.916391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.916528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.916559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.916734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.916772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.916956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.916987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.917268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.917299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.917412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.917442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.917644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.917676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.917871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.917902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.487 [2024-07-26 11:35:53.918093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.487 [2024-07-26 11:35:53.918124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.487 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.918312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.918343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.918547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.918577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.918712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.918744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.918873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.918904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.919076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.919106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.919371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.919403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.919610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.919650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.919946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.919977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.920103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.920134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.920326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.920356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.920560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.920591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.920841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.920873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.921000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.921032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.921283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.921314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.921492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.921526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.921770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.921803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.921913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.921945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.922073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.922104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.922309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.922339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.922551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.922582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.922729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.922766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.922955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.922986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.923115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.923145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.923340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.923371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.923571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.923602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.923790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.923822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.923954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.923985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.924233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.924264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.924381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.924412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.924588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.924619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.924912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.924944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.925129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.925160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.925331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.925362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.925502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.925533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.925736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.925768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.488 qpair failed and we were unable to recover it.
00:27:58.488 [2024-07-26 11:35:53.926009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.488 [2024-07-26 11:35:53.926039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.926285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.926316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.926500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.926531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.926843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.926875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.927061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.927092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.927204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.927235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.927416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.927448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.927692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.927723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.927914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.927945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.928127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.928159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.928354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.928385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.928529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.928560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.928758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.928790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.929049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.929080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.929269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.929300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.929414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.929444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.929567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.929597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.929829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.929867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.929994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.930025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.930170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.930200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.930375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.930406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.930595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.930635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.930831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.930862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.930994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.931026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.931137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.931166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.931372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.931409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.931615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.931655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.931845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.931877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.932093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.932124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.932240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.932270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.932406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.932437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.932682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.932714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.932955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.932987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.933095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.933126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.489 [2024-07-26 11:35:53.933325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.489 [2024-07-26 11:35:53.933357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.489 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.933547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.933578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.933870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.933902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.934090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.934121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.934263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.934294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.934544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.934576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.934832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.934864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.935062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.935093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.935216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.935247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.935435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.490 [2024-07-26 11:35:53.935466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.490 qpair failed and we were unable to recover it.
00:27:58.490 [2024-07-26 11:35:53.935665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.935697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.935889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.935921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.936163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.936198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.936376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.936404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.936529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.936559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 
00:27:58.490 [2024-07-26 11:35:53.936676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.936710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.936837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.936866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.937127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.937158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.937283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.937318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.937448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.937480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 
00:27:58.490 [2024-07-26 11:35:53.937664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.937695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.937909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.937940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.938199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.938230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.938367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.938398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.938533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.938563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 
00:27:58.490 [2024-07-26 11:35:53.938744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.938775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.938919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.938950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.939047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.939077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.939336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.939366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.939575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.939605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 
00:27:58.490 [2024-07-26 11:35:53.939861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.939893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.940029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.940059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.940270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.940301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.940475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.940505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.940721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.940752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 
00:27:58.490 [2024-07-26 11:35:53.940940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.940970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.941260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.941291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.941569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.941599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.490 qpair failed and we were unable to recover it. 00:27:58.490 [2024-07-26 11:35:53.941736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.490 [2024-07-26 11:35:53.941767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.942013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.942046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.942265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.942294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.942429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.942460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.942581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.942612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.942874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.942904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.943093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.943123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.943297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.943327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.943573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.943604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.943767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.943798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.944066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.944096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.944283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.944313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.944580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.944611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.944860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.944891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.945015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.945044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.945163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.945195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.945381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.945411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.945595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.945625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.945830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.945861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.946056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.946086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.946274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.946304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.946551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.946582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.946774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.946806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.947042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.947072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.947300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.947331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.947513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.947543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.947730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.947762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.947899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.947930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.948043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.948074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.948336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.948366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.948565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.948595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.948777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.948808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.949002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.949033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.949280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.949310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.949510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.949540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.949653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.949685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.949809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.949840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 
00:27:58.491 [2024-07-26 11:35:53.950057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.950086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.491 [2024-07-26 11:35:53.950331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.491 [2024-07-26 11:35:53.950362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.491 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.950660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.950693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.950868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.950898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.951098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.951128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 
00:27:58.492 [2024-07-26 11:35:53.951338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.951368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.951557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.951587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.951717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.951748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.951930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.951962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.952202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.952233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 
00:27:58.492 [2024-07-26 11:35:53.952407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.952437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.952705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.952742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.952871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.952902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.953092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.953122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.953320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.953351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 
00:27:58.492 [2024-07-26 11:35:53.953485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.953515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.953682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.953712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.953909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.953939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.954048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.954078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.954206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.954236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 
00:27:58.492 [2024-07-26 11:35:53.954444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.954475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.954739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.954770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.955015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.955045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.955291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.955322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.955503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.955533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 
00:27:58.492 [2024-07-26 11:35:53.955678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.955709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.955886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.955916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.956089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.956119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.956253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.956283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 00:27:58.492 [2024-07-26 11:35:53.956483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.492 [2024-07-26 11:35:53.956513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.492 qpair failed and we were unable to recover it. 
00:27:58.492 [2024-07-26 11:35:53.956778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.956809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.956995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.957026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.957210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.957240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.957447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.957477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.957612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.957651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.957782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.957812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.957989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.958019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.958258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.958287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.958504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.958542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.492 qpair failed and we were unable to recover it.
00:27:58.492 [2024-07-26 11:35:53.958677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.492 [2024-07-26 11:35:53.958707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.958845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.958875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.959067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.959097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.959341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.959371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.959561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.959592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.959876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.959908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.960014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.960044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.960306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.960337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.960457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.960487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.960638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.960669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.960792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.960821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.961095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.961125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.961252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.961282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.961557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.961587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.961866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.961898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.962089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.962119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.962234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.962264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.962527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.962558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.962792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.962824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.963075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.963106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.963298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.963329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.963455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.963485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.963692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.963723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.964014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.964044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.964218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.964249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.964501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.964531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.964779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.964810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.965081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.965112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.965304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.965333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.965469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.965499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.965688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.965721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.965862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.965892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.966010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.493 [2024-07-26 11:35:53.966041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.493 qpair failed and we were unable to recover it.
00:27:58.493 [2024-07-26 11:35:53.966300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.966331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.966507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.966537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.966676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.966707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.966887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.966917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.967107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.967136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.967402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.967433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.967673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.967704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.967987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.968026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.968231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.968263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.968482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.968513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.968637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.968668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.968812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.968843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.968980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.969010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.969191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.969222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.969495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.969526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.969772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.969802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.969975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.970004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.970182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.970213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.970383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.970413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.970649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.970680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.970933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.970970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.971209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.971238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.971355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.971385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.971656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.971688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.971811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.971841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.971950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.971980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.972162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.972197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.972409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.972439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.972575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.972605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.972749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.972780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.972976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.973007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.973141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.973172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.973388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.973418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.973542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.973571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.973723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.973755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.973956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.973987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.974254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.974283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.974520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.494 [2024-07-26 11:35:53.974550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.494 qpair failed and we were unable to recover it.
00:27:58.494 [2024-07-26 11:35:53.974730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.974762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.974951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.974981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.975153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.975182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.975374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.975404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.975527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.975557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.975746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.975777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.975892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.975923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.976097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.976127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.976316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.976345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.976537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.976568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.976774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.976805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.977101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.977131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.977377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.977407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.977591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.977620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.977771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.977802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.978074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.978104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.978216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.978246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.978364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.978393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.978611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.978651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.978796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.978825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.979037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.979067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.979243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.979273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.979562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.979597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.979743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.979778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.979918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.979949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.980135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.980165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.980351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.980381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.980614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.980653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.980785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.980815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.981074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.981104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.981235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.981266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.981461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.981491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.981672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.981703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.981911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.981942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.982185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.982215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.982329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.982359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.982548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.982578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.495 [2024-07-26 11:35:53.982781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.495 [2024-07-26 11:35:53.982812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.495 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.983007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.983037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.983162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.983193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.983369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.983398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.983519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.983549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.983753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.983785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.983922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.983952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.984192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.984222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.984414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.984445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.984573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.984603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.984798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.984829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.984953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.984983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.985191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.985222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.985433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.985463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.985685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.985716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.985897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.985927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.986051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.986081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.986325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.986355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.986596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.986644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.986865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.986894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.987026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.987056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.987248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.987277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.987460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.987490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.987687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.987719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.987894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.987924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.988184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.988214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.988334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.988364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.988557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.988587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.988857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.988888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.989010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.989044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.989285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.989315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.989577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.989607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.989801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.989831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.989969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.989999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.990183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.990213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.990329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.990361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.990505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.990535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.990718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.990749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.496 [2024-07-26 11:35:53.990946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.496 [2024-07-26 11:35:53.990977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.496 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.991086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.991116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.991310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.991340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.991515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.991545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.991687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.991719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.991982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.992012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.992265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.992296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.992402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.992432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.992679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.992709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.992904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.992935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.993197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.993227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.993490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.993520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.993648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.993679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.993799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.993828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.994019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.994049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.994249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.994286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.994469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.994498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.994603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.994640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.994834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.994864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.994969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.994999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.995115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.995145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.995258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.995288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.995424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.995454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.995653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.995684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.995796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.995827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.995960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.995990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.996131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.996162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.996429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.996459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.996707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.996738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.996876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.996906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.997044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.997074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.997358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.997387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.997693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.997724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.997996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.998026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.998143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.497 [2024-07-26 11:35:53.998173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.497 qpair failed and we were unable to recover it.
00:27:58.497 [2024-07-26 11:35:53.998358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.497 [2024-07-26 11:35:53.998388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.497 qpair failed and we were unable to recover it. 00:27:58.497 [2024-07-26 11:35:53.998687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.497 [2024-07-26 11:35:53.998720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.497 qpair failed and we were unable to recover it. 00:27:58.497 [2024-07-26 11:35:53.998906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.497 [2024-07-26 11:35:53.998936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.497 qpair failed and we were unable to recover it. 00:27:58.497 [2024-07-26 11:35:53.999119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.497 [2024-07-26 11:35:53.999149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.497 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:53.999399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:53.999430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 
00:27:58.498 [2024-07-26 11:35:53.999609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:53.999651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:53.999826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:53.999857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:53.999988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.000024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.000236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.000267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.000461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.000491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 
00:27:58.498 [2024-07-26 11:35:54.000623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.000663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.000852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.000882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.001055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.001085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.001335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.001365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.001551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.001581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 
00:27:58.498 [2024-07-26 11:35:54.001713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.001743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.001853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.001886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.002095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.002126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.002385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.002415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.002555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.002585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 
00:27:58.498 [2024-07-26 11:35:54.002752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.002784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.002895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.002927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.003047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.003077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.003320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.003351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.003620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.003662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 
00:27:58.498 [2024-07-26 11:35:54.003859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.003889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.004019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.004056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.004189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.004219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.004426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.004455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.004697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.004728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 
00:27:58.498 [2024-07-26 11:35:54.004833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.004862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.005052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.005082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.005346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.005376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.005516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.005545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.005760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.005797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 
00:27:58.498 [2024-07-26 11:35:54.005984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.498 [2024-07-26 11:35:54.006014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.498 qpair failed and we were unable to recover it. 00:27:58.498 [2024-07-26 11:35:54.006203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.006233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.006382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.006413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.006538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.006568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.006809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.006840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 
00:27:58.499 [2024-07-26 11:35:54.006977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.007006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.007190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.007220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.007402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.007432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.007635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.007666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.007906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.007936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 
00:27:58.499 [2024-07-26 11:35:54.008720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.499 [2024-07-26 11:35:54.008759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.499 qpair failed and we were unable to recover it.
00:27:58.499 [2024-07-26 11:35:54.009265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.009296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.009437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.009467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.009605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.009643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.009868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.009897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.010071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.010102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 
00:27:58.499 [2024-07-26 11:35:54.010345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.010376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.010671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.010702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.010821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.010851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.010974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.011005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.011174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.011204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 
00:27:58.499 [2024-07-26 11:35:54.011391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.011421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.011665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.011703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.011968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.011999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.012186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.012216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.012407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.012437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 
00:27:58.499 [2024-07-26 11:35:54.012568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.012610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.012823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.012854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.013121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.013152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.013393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.013423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.013616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.013667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 
00:27:58.499 [2024-07-26 11:35:54.013791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.013820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.014009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.014038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.014226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.014256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.499 [2024-07-26 11:35:54.014454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.499 [2024-07-26 11:35:54.014485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.499 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.014725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.014757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 
00:27:58.500 [2024-07-26 11:35:54.014957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.014987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.015188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.015218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.015471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.015502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.015714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.015745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.016016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.016047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 
00:27:58.500 [2024-07-26 11:35:54.016232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.016262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.016376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.016407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.016551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.016581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.016786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.016817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.017060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.017089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 
00:27:58.500 [2024-07-26 11:35:54.017236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.017267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.017434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.017463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.017648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.017679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.017801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.017831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.018014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.018045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 
00:27:58.500 [2024-07-26 11:35:54.018239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.018269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.018444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.018473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.018662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.018692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.018963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.018993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 00:27:58.500 [2024-07-26 11:35:54.019173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.500 [2024-07-26 11:35:54.019203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.500 qpair failed and we were unable to recover it. 
00:27:58.500 [2024-07-26 11:35:54.019394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.019423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.019620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.019658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.019843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.019874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.020008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.020036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.020222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.020252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.020430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.020460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.020675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.020716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.020926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.020956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.021079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.021108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.021230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.021259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.021462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.021492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.021680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.021712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.021997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.022027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.022299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.022329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.022574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.022605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.022726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.022757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.500 qpair failed and we were unable to recover it.
00:27:58.500 [2024-07-26 11:35:54.022982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.500 [2024-07-26 11:35:54.023011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.023142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.023172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.023307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.023336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.023516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.023546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.023758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.023791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.024057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.024087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.024273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.024304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.024481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.024512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.024749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.024779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.024916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.024944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.025084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.025114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.025305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.025335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.025605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.025642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.025764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.025795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.025903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.025932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.026122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.026151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.026420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.026451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.026672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.026723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.026924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.026956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.027208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.027240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.027480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.027510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.027622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.027664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.027936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.027966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.028171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.028201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.028455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.028485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.028617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.028657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.028930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.028960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.029132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.029162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.029293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.029324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.029500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.029530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.029728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.029766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.030034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.030065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.030245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.030274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.030514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.030544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.030755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.030787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.030910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.030940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.031177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.031207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.031380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.501 [2024-07-26 11:35:54.031410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.501 qpair failed and we were unable to recover it.
00:27:58.501 [2024-07-26 11:35:54.031679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.031711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.031862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.031892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.032050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.032081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.032253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.032284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.032472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.032502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.032692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.032723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.032859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.032892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.033011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.033041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.033173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.033203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.033445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.033474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.033717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.033747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.033991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.034022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.034148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.034178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.034347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.034377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.034499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.034531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.034779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.034810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.034993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.035023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.035218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.035248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.035369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.035399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.035601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.035650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.035773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.035805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.036015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.036046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.036241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.036271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.036532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.036562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.036746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.036777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.037018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.037048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.037288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.037318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.037451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.037481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.037585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.037617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.037832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.037863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.038110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.038139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.038327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.038357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.502 [2024-07-26 11:35:54.038485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.502 [2024-07-26 11:35:54.038515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.502 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.038769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.038802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.038927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.038958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.039142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.039172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.039361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.039390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.039702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.039733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.039856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.039886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.040135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.040165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.040364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.503 [2024-07-26 11:35:54.040394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.503 qpair failed and we were unable to recover it.
00:27:58.503 [2024-07-26 11:35:54.040610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.040648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.040824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.040854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.040977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.041008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.041115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.041145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.041278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.041308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 
00:27:58.503 [2024-07-26 11:35:54.041488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.041524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.041769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.041800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.042007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.042037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.042255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.042286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.042405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.042435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 
00:27:58.503 [2024-07-26 11:35:54.042616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.042656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.042921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.042952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.043132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.043162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.043337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.043367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.043507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.043537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 
00:27:58.503 [2024-07-26 11:35:54.043812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.043845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.044086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.044116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.044257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.044288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.044534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.044564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.044776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.044809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 
00:27:58.503 [2024-07-26 11:35:54.045005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.045036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.045237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.045267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.045401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.045431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.045537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.045568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.045681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.045712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 
00:27:58.503 [2024-07-26 11:35:54.045907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.045937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.046230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.046261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.046385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.046415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.046531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.503 [2024-07-26 11:35:54.046561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.503 qpair failed and we were unable to recover it. 00:27:58.503 [2024-07-26 11:35:54.046753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.046785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.047010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.047040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.047285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.047314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.047437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.047473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.047744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.047775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.047958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.047989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.048106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.048137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.048314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.048343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.048532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.048562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.048756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.048788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.048988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.049018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.049135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.049165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.049347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.049377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.049522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.049552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.049743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.049775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.049952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.049983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.050224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.050255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.050453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.050484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.050677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.050709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.050903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.050933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.051110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.051140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.051384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.051415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.051542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.051571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.051786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.051818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.051938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.051969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.052176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.052206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.052443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.052473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.052672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.052703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.052913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.052943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.053175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.053206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.053387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.053417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.053615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.053656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.053917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.053947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.054057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.054087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.054197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.054228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.054397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.054428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 
00:27:58.504 [2024-07-26 11:35:54.054643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.054675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.054812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.504 [2024-07-26 11:35:54.054842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.504 qpair failed and we were unable to recover it. 00:27:58.504 [2024-07-26 11:35:54.055015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.055045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.055257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.055288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.055474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.055505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 
00:27:58.505 [2024-07-26 11:35:54.055748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.055779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.055956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.055987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.056238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.056269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.056413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.056448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.056590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.056622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 
00:27:58.505 [2024-07-26 11:35:54.056856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.056888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.057061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.057091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.057281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.057312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.057447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.057478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.057611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.057652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 
00:27:58.505 [2024-07-26 11:35:54.057784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.057815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.058070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.058101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.058270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.058300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.058426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.058457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.058654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.058685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 
00:27:58.505 [2024-07-26 11:35:54.058884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.058914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.059208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.059244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.059352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.059382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.059645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.059676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.059800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.059831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 
00:27:58.505 [2024-07-26 11:35:54.059956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.059986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.060104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.060135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.060330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.060360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.060543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.060574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 00:27:58.505 [2024-07-26 11:35:54.060778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.505 [2024-07-26 11:35:54.060810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.505 qpair failed and we were unable to recover it. 
00:27:58.505 [2024-07-26 11:35:54.060985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.061015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.061151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.061183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.061299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.061329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.061442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.061474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.061661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.061693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.061937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.061968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.062228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.062258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.062394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.062425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.062612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.062653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.062892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.505 [2024-07-26 11:35:54.062923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.505 qpair failed and we were unable to recover it.
00:27:58.505 [2024-07-26 11:35:54.063064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.063094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.063284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.063315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.063511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.063542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.063744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.063776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.064049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.064080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.064324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.064355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.064574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.064605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.064887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.064917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.065166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.065198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.065468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.065499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.065637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.065668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.065863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.065894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.066165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.066197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.066389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.066420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.066714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.066746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.066917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.066947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.067083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.067114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.067323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.067353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.067494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.067525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.067766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.067797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.067975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.068007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.068248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.068284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.068497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.068527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.068654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.068687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.068828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.068864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.068987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.069018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.069208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.069239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.069455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.069485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.069664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.069706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.069879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.069910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.070030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.070060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.070278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.070309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.070513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.070544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.070751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.070782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.070985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.071015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.071273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.071303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.071489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.071520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.506 qpair failed and we were unable to recover it.
00:27:58.506 [2024-07-26 11:35:54.071726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.506 [2024-07-26 11:35:54.071757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.071872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.071903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.072098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.072128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.072253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.072284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.072475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.072507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.072697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.072727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.072901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.072932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.073058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.073089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.073210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.073241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.073409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.073441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.073617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.073659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.073915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.073969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.074102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.074134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.074285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.074316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.074492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.074522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.074794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.074826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.075018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.075049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.075237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.075267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.075491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.075520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.075649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.075681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.075868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.075900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.076034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.076063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.076257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.076287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.076472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.076502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.076772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.076810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.076951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.076982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.077183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.077214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.077386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.077416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.077547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.077578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.077732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.077764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.077933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.077962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.078189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.078220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.078459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.507 [2024-07-26 11:35:54.078489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.507 qpair failed and we were unable to recover it.
00:27:58.507 [2024-07-26 11:35:54.078623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.078665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.078911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.078941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.079129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.079159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.079281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.079310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.079552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.079583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.079839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.079871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.080006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.080037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.080221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.080251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.080430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.080458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.080650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.080680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.080876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.080906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.081114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.081144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.081249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.081278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.081527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.081559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.081688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.508 [2024-07-26 11:35:54.081718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.508 qpair failed and we were unable to recover it.
00:27:58.508 [2024-07-26 11:35:54.081955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.081984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.082180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.082208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.082478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.082508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.082714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.082753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.082974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.083005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 
00:27:58.508 [2024-07-26 11:35:54.083204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.083235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.083425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.083456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.083655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.083687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.083928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.083959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.084135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.084165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 
00:27:58.508 [2024-07-26 11:35:54.084433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.084464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.084659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.084693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.084872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.084902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.085175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.085206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.085399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.085430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 
00:27:58.508 [2024-07-26 11:35:54.085644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.085675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.085820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.085853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.086039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.086069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.086175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.086206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.086477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.086507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 
00:27:58.508 [2024-07-26 11:35:54.086772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.086803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.087063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.087094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.508 [2024-07-26 11:35:54.087217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.508 [2024-07-26 11:35:54.087247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.508 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.087443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.087473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.087593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.087623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.087813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.087844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.087982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.088014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.088129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.088159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.088367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.088407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.088605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.088656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.088856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.088887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.089080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.089112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.089300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.089330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.089533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.089563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.089817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.089850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.090035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.090066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.090259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.090290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.090545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.090576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.090777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.090808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.091067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.091097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.091279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.091310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.091416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.091446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.091728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.091760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.091954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.091991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.092271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.092301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.092522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.092553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.092751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.092782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.093023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.093054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.093263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.093295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.093517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.093548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.093797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.093832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.093971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.094002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.094211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.094242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.094369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.094402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.094574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.094605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.094718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.094750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.094876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.094907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.095204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.095234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.095426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.095457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 00:27:58.509 [2024-07-26 11:35:54.095661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.509 [2024-07-26 11:35:54.095692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.509 qpair failed and we were unable to recover it. 
00:27:58.509 [2024-07-26 11:35:54.095938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.095968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.096152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.096182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.096306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.096340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.096548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.096577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.096862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.096895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 
00:27:58.510 [2024-07-26 11:35:54.097015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.097047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.097238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.097268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.097411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.097452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.097597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.097655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.097875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.097920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 
00:27:58.510 [2024-07-26 11:35:54.098130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.098163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.098296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.098327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.098456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.098486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.098702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.098733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.098929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.098959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 
00:27:58.510 [2024-07-26 11:35:54.099206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.099236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.099353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.099383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.099652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.099699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.099991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.100028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.100221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.100256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 
00:27:58.510 [2024-07-26 11:35:54.100450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.100481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.100620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.100663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.100840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.100871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.101079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.101116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.101293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.101327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 
00:27:58.510 [2024-07-26 11:35:54.101527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.101574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.510 [2024-07-26 11:35:54.101778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.510 [2024-07-26 11:35:54.101825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.510 qpair failed and we were unable to recover it. 00:27:58.780 [2024-07-26 11:35:54.101960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.780 [2024-07-26 11:35:54.101992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.780 qpair failed and we were unable to recover it. 00:27:58.780 [2024-07-26 11:35:54.102195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.780 [2024-07-26 11:35:54.102239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.780 qpair failed and we were unable to recover it. 00:27:58.780 [2024-07-26 11:35:54.102382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.780 [2024-07-26 11:35:54.102413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.780 qpair failed and we were unable to recover it. 
00:27:58.780 [2024-07-26 11:35:54.106950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.780 [2024-07-26 11:35:54.107009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.780 qpair failed and we were unable to recover it.
00:27:58.780 [2024-07-26 11:35:54.107133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.780 [2024-07-26 11:35:54.107166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.780 qpair failed and we were unable to recover it.
00:27:58.780 [2024-07-26 11:35:54.107421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.780 [2024-07-26 11:35:54.107452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.780 qpair failed and we were unable to recover it.
00:27:58.780 [2024-07-26 11:35:54.107699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.780 [2024-07-26 11:35:54.107732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.780 qpair failed and we were unable to recover it.
00:27:58.780 [2024-07-26 11:35:54.107864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.780 [2024-07-26 11:35:54.107895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420
00:27:58.780 qpair failed and we were unable to recover it.
00:27:58.783 [2024-07-26 11:35:54.129647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.129684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.129922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.129952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.130090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.130120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.130318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.130348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.130606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.130645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 
00:27:58.783 [2024-07-26 11:35:54.130829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.130860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.131049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.131079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.131359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.131389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.131516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.131547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.131741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.131772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 
00:27:58.783 [2024-07-26 11:35:54.131910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.131940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.132197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.132228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.132356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.132386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.132565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.132595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.132799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.132830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 
00:27:58.783 [2024-07-26 11:35:54.133101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.133130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.133247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.133277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.133450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.133485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.133703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.133734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.134000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.134030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 
00:27:58.783 [2024-07-26 11:35:54.134223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.134253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.134377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.134407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.134518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.134547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.134811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.134843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.783 [2024-07-26 11:35:54.135087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.135118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 
00:27:58.783 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:58.783 [2024-07-26 11:35:54.135356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.135388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.135585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.135616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.783 [2024-07-26 11:35:54.135762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.135793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.135900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.135929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 
00:27:58.783 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:58.783 [2024-07-26 11:35:54.136056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.136086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:58.783 [2024-07-26 11:35:54.136261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.136292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.783 [2024-07-26 11:35:54.136557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.783 [2024-07-26 11:35:54.136587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.783 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.136745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.136778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 
00:27:58.784 [2024-07-26 11:35:54.137076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.137106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.137281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.137311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.137516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.137547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.137681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.137712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.137889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.137919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 
00:27:58.784 [2024-07-26 11:35:54.138112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.138143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.138361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.138391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.138597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.138635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.138810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.138841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.139047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.139077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 
00:27:58.784 [2024-07-26 11:35:54.139200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.139231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.139371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.139402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.139647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.139678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.139947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.139978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.140108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.140139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 
00:27:58.784 [2024-07-26 11:35:54.140269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.140301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.140554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.140584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.140731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.140762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.140951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.140980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.141153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.141183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 
00:27:58.784 [2024-07-26 11:35:54.141301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.141331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.141506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.141535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.141774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.141843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.142081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.142114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.142251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.142281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 
00:27:58.784 [2024-07-26 11:35:54.142401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.142431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.142641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.142673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.142934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.142966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.143086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.784 [2024-07-26 11:35:54.143114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.784 qpair failed and we were unable to recover it. 00:27:58.784 [2024-07-26 11:35:54.143244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.143275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.143466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.143497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.143676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.143708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.143899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.143928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.144121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.144151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.144339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.144369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.144624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.144674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.144811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.144841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.145104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.145134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.145254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.145283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.145390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.145418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.145558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.145588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.145858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.145889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.146013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.146043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.146194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.146223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.146464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.146495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.146671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.146701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.146833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.146863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.146972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.147005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.147212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.147242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.147443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.147474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.147606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.147653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.147843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.147874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.148071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.148101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.148281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.148310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.148513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.148543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.148739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.148771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.148947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.148977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.149085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.149113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.149239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.149268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.149442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.149472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.149596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.149624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.149827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.149858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.150078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.150134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.150272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.150305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.150494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.150525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 
00:27:58.785 [2024-07-26 11:35:54.150668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.150700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.785 [2024-07-26 11:35:54.150885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.785 [2024-07-26 11:35:54.150916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.785 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.151210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.151241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.151360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.151389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.151598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.151641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 
00:27:58.786 [2024-07-26 11:35:54.151834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.151865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.152071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.152101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.152210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.152240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.152363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.152393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.152662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.152694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 
00:27:58.786 [2024-07-26 11:35:54.152824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.152854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.153042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.153072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.153192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.153222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.153420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.153449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.153576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.153606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 
00:27:58.786 [2024-07-26 11:35:54.153739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.153770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.153910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.153940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.154060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.154091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.154206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.154236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.154358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.154389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 
00:27:58.786 [2024-07-26 11:35:54.154599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.154639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.154764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.154794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.154921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.154951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.155063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.155094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.155276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.155306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 
00:27:58.786 [2024-07-26 11:35:54.155545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.155575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.155705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.155736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.155864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.155895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.156022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.156052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.156240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.156270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 
00:27:58.786 [2024-07-26 11:35:54.156385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.156416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.156527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.156556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.156681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.156712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.156896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.156929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.157106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.157136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 
00:27:58.786 [2024-07-26 11:35:54.157263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.157294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.157488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.157518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.157621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.157665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.157792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.786 [2024-07-26 11:35:54.157823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.786 qpair failed and we were unable to recover it. 00:27:58.786 [2024-07-26 11:35:54.157949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.157978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.787 [2024-07-26 11:35:54.158153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.158183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.158313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.158344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.158480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.158510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.158718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.158749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.158858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.158890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.787 [2024-07-26 11:35:54.159092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.159122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.159224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.159253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.159385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.159416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.159615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.159653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.159841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.159871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.787 [2024-07-26 11:35:54.159988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.160018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.160200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.160231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.160425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.160456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.160645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.160676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.160811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.160841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.787 [2024-07-26 11:35:54.161020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.161050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.161171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.161203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.161333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.161364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.161496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.161528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.161660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.161692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.787 [2024-07-26 11:35:54.161809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.161840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.161954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.161983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.162170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.162202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.162329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.162358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.162542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.162574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.787 [2024-07-26 11:35:54.162777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.162810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.162919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.162948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.163075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.163105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.163231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.163262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.163370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.163401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.787 [2024-07-26 11:35:54.163514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.163544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.163736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.163766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.163872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.163902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.164025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.164056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 00:27:58.787 [2024-07-26 11:35:54.164158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.787 [2024-07-26 11:35:54.164192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.787 qpair failed and we were unable to recover it. 
00:27:58.789 [... connect() failed / qpair failed pattern continues, timestamps 11:35:54.171097 to 11:35:54.172269 ...]
00:27:58.789 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:58.789 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:58.789 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:58.789 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:58.790 [2024-07-26 11:35:54.181515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.790 [2024-07-26 11:35:54.181544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.790 qpair failed and we were unable to recover it. 00:27:58.790 [2024-07-26 11:35:54.181735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.790 [2024-07-26 11:35:54.181766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.790 qpair failed and we were unable to recover it. 00:27:58.790 [2024-07-26 11:35:54.181942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.790 [2024-07-26 11:35:54.181971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.790 qpair failed and we were unable to recover it. 00:27:58.790 [2024-07-26 11:35:54.182078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.790 [2024-07-26 11:35:54.182109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.790 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.182219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.182249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.182359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.182389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.182496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.182525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.182707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.182738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.182851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.182881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.183006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.183035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.183141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.183171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.183278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.183307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.183500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.183529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.183715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.183747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.183873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.183904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.184014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.184043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.184157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.184188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.184301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.184337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.184448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.184478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.184588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.184617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.184733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.184762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.184873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.184902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.185038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.185068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.185247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.185279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.185393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.185424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.185545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.185575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.185698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.185730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.185856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.185888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.186089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.186120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.186306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.186338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.186562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.186624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.186840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.186874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.187067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.187098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.187222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.187253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.187442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.187473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.187599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.187637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.187815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.187846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.188019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.188049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.188182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.188212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.791 [2024-07-26 11:35:54.188316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.188346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 
00:27:58.791 [2024-07-26 11:35:54.188466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.791 [2024-07-26 11:35:54.188495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.791 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.188690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.188721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.188918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.188948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.189130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.189159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.189338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.189404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 
00:27:58.792 [2024-07-26 11:35:54.189654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.189684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.189801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.189832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.189943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.189973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.190164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.190193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.190316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.190345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 
00:27:58.792 [2024-07-26 11:35:54.190465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 Malloc0 00:27:58.792 [2024-07-26 11:35:54.190495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.190669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.190699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.190806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.190836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.191105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.191135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.792 [2024-07-26 11:35:54.191322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.191352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 
00:27:58.792 [2024-07-26 11:35:54.191557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.191587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b8f30 with addr=10.0.0.2, port=4420 00:27:58.792 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.191788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.191828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.792 [2024-07-26 11:35:54.192019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.192049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:58.792 [2024-07-26 11:35:54.192240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.192268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 
00:27:58.792 [2024-07-26 11:35:54.192436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.192463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.192562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.192589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.192703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.192730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.192966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.192993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.193104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.193131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 
00:27:58.792 [2024-07-26 11:35:54.193309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.193337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.193575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.193602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4200000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.193723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.193756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.193896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.193925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.194117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.194146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 
00:27:58.792 [2024-07-26 11:35:54.194318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.194357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.194538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.194568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.194760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.194791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.194903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.194931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.195049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.195077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 
00:27:58.792 [2024-07-26 11:35:54.195284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.195314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.195485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.195515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.792 [2024-07-26 11:35:54.195759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.792 [2024-07-26 11:35:54.195789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.792 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.195960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.195989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.196161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.196189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 
00:27:58.793 [2024-07-26 11:35:54.196386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.196416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.196606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.196646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.196913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.196943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.197050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.197080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.197227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.197255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 
00:27:58.793 [2024-07-26 11:35:54.197360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.197388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.197588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.197617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.197888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.197918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.198097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.793 [2024-07-26 11:35:54.198132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.198161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.198271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.198298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 
00:27:58.793 [2024-07-26 11:35:54.198553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.198583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.198833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.198864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.199037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.199065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.199234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.199264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.199433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.199463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 
00:27:58.793 [2024-07-26 11:35:54.199588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.199617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.199820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.199851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.200044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.200072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.200284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.200313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.200505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.200535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 
00:27:58.793 [2024-07-26 11:35:54.200709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.200739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.200929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.200959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.201152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.201182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.201373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.201402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.201586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.201615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 
00:27:58.793 [2024-07-26 11:35:54.201824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.201855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.202042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.202071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.202339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.202369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.202479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.202509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.202762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.202792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 
00:27:58.793 [2024-07-26 11:35:54.203010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.203068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.203220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 [2024-07-26 11:35:54.203253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.793 qpair failed and we were unable to recover it. 00:27:58.793 [2024-07-26 11:35:54.203444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.793 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.793 [2024-07-26 11:35:54.203474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.203715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.203748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 
00:27:58.794 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.794 [2024-07-26 11:35:54.203953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.203984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.794 [2024-07-26 11:35:54.204255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.204286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:58.794 [2024-07-26 11:35:54.204475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.204505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.204723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.204756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 
00:27:58.794 [2024-07-26 11:35:54.205024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.205054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.205229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.205259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.205384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.205414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.205707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.205747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.205966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.205997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 
00:27:58.794 [2024-07-26 11:35:54.206265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.206295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.206419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.206448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.206672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.206702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.206907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.206935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.207202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.207235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 
00:27:58.794 [2024-07-26 11:35:54.207430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.207460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.207669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.207700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.207886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.207916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.208042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.208070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.208276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.208305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 
00:27:58.794 [2024-07-26 11:35:54.208546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.208576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.208714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.208743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.208967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.208998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.209114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.209144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.209271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.209301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 
00:27:58.794 [2024-07-26 11:35:54.209487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.209517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.209689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.209719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.209960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.209989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.210109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.210138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.210347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.210377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 
00:27:58.794 [2024-07-26 11:35:54.210511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.210541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.794 qpair failed and we were unable to recover it. 00:27:58.794 [2024-07-26 11:35:54.210753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.794 [2024-07-26 11:35:54.210783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.211047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.211077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.211289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.211319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.795 [2024-07-26 11:35:54.211509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.211538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 
00:27:58.795 [2024-07-26 11:35:54.211663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.211694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.211808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.795 [2024-07-26 11:35:54.211839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.212038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.212067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.795 [2024-07-26 11:35:54.212350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.212381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 
00:27:58.795 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:58.795 [2024-07-26 11:35:54.212590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.212620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.212825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.212855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.213045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.213074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.213283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.213313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.213584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.213614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 
00:27:58.795 [2024-07-26 11:35:54.213761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.213792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.213985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.214015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.214158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.214188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.214405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.214436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.214612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.214653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 
00:27:58.795 [2024-07-26 11:35:54.214799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.214829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.215021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.215051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.215346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.215375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.215497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.215526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.215656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.215687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 
00:27:58.795 [2024-07-26 11:35:54.215950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.215979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.216150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.216179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.216366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.216396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.216591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.216620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.216832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.216862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 
00:27:58.795 [2024-07-26 11:35:54.217004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.217034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f0000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.217247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.217281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.217408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.217437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.217705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.217736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.217923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.217953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 
00:27:58.795 [2024-07-26 11:35:54.218257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.218286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.218393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.218422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.218607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.795 [2024-07-26 11:35:54.218646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.795 qpair failed and we were unable to recover it. 00:27:58.795 [2024-07-26 11:35:54.218823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.796 [2024-07-26 11:35:54.218853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.796 qpair failed and we were unable to recover it. 00:27:58.796 [2024-07-26 11:35:54.219121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.796 [2024-07-26 11:35:54.219150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420 00:27:58.796 qpair failed and we were unable to recover it. 
00:27:58.796 [2024-07-26 11:35:54.219345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.219375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:58.796 [2024-07-26 11:35:54.219494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.219538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.219803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.219835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:58.796 [2024-07-26 11:35:54.220047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.220082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:58.796 [2024-07-26 11:35:54.220364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.220394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:58.796 [2024-07-26 11:35:54.220587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.220617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.220763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.220793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.220983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.221013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.221138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.221166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.221433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.221462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.221710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.221743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.221879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.221910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.222101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.222131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.222267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.222297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.222416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.222445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.222625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.222665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.222914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.796 [2024-07-26 11:35:54.222945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f41f8000b90 with addr=10.0.0.2, port=4420
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.222997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:58.796 [2024-07-26 11:35:54.228717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.796 [2024-07-26 11:35:54.228844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.796 [2024-07-26 11:35:54.228890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.796 [2024-07-26 11:35:54.228912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.796 [2024-07-26 11:35:54.228931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.796 [2024-07-26 11:35:54.228979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:58.796 11:35:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1671578
00:27:58.796 [2024-07-26 11:35:54.238598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.796 [2024-07-26 11:35:54.238692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.796 [2024-07-26 11:35:54.238721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.796 [2024-07-26 11:35:54.238735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.796 [2024-07-26 11:35:54.238747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.796 [2024-07-26 11:35:54.238778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.248672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.796 [2024-07-26 11:35:54.248796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.796 [2024-07-26 11:35:54.248817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.796 [2024-07-26 11:35:54.248827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.796 [2024-07-26 11:35:54.248836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.796 [2024-07-26 11:35:54.248858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.258638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.796 [2024-07-26 11:35:54.258743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.796 [2024-07-26 11:35:54.258760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.796 [2024-07-26 11:35:54.258767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.796 [2024-07-26 11:35:54.258773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.796 [2024-07-26 11:35:54.258788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.268639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.796 [2024-07-26 11:35:54.268708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.796 [2024-07-26 11:35:54.268723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.796 [2024-07-26 11:35:54.268730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.796 [2024-07-26 11:35:54.268736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.796 [2024-07-26 11:35:54.268751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.796 qpair failed and we were unable to recover it.
00:27:58.796 [2024-07-26 11:35:54.278693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.278750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.278777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.278784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.278790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.278804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.288687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.288774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.288789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.288795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.288801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.288816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.298744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.298801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.298816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.298827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.298834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.298848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.308727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.308785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.308799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.308806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.308812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.308827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.318753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.318805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.318820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.318827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.318834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.318848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.328788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.328838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.328853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.328859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.328865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.328879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.338814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.338872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.338885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.338892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.338899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.338913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.348856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.348911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.348925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.348932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.348938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.348953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.358894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.358949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.358963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.358970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.358977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.358991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.368928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.368993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.369008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.369014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.369020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.369034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.378959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.379012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.379028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.379035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.379044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.379058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.388892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.388950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.388968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.388975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.388981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.388995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.398997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.399049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.399062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.797 [2024-07-26 11:35:54.399069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.797 [2024-07-26 11:35:54.399075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.797 [2024-07-26 11:35:54.399089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.797 qpair failed and we were unable to recover it.
00:27:58.797 [2024-07-26 11:35:54.409015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.797 [2024-07-26 11:35:54.409069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.797 [2024-07-26 11:35:54.409083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.798 [2024-07-26 11:35:54.409090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.798 [2024-07-26 11:35:54.409096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.798 [2024-07-26 11:35:54.409109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.798 qpair failed and we were unable to recover it.
00:27:58.798 [2024-07-26 11:35:54.418979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.798 [2024-07-26 11:35:54.419038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.798 [2024-07-26 11:35:54.419053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.798 [2024-07-26 11:35:54.419060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.798 [2024-07-26 11:35:54.419066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:58.798 [2024-07-26 11:35:54.419080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.798 qpair failed and we were unable to recover it.
00:27:59.057 [2024-07-26 11:35:54.429047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.057 [2024-07-26 11:35:54.429114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.057 [2024-07-26 11:35:54.429130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.057 [2024-07-26 11:35:54.429139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.057 [2024-07-26 11:35:54.429146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.057 [2024-07-26 11:35:54.429165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.057 qpair failed and we were unable to recover it.
00:27:59.057 [2024-07-26 11:35:54.439130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.057 [2024-07-26 11:35:54.439186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.057 [2024-07-26 11:35:54.439201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.057 [2024-07-26 11:35:54.439209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.057 [2024-07-26 11:35:54.439215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.057 [2024-07-26 11:35:54.439230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.057 qpair failed and we were unable to recover it.
00:27:59.057 [2024-07-26 11:35:54.449143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.057 [2024-07-26 11:35:54.449200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.057 [2024-07-26 11:35:54.449214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.057 [2024-07-26 11:35:54.449221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.057 [2024-07-26 11:35:54.449227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.057 [2024-07-26 11:35:54.449241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.057 qpair failed and we were unable to recover it.
00:27:59.057 [2024-07-26 11:35:54.459152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.057 [2024-07-26 11:35:54.459209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.057 [2024-07-26 11:35:54.459222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.057 [2024-07-26 11:35:54.459229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.057 [2024-07-26 11:35:54.459236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.057 [2024-07-26 11:35:54.459250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.057 qpair failed and we were unable to recover it.
00:27:59.057 [2024-07-26 11:35:54.469186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.057 [2024-07-26 11:35:54.469280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.057 [2024-07-26 11:35:54.469294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.057 [2024-07-26 11:35:54.469301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.057 [2024-07-26 11:35:54.469307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.058 [2024-07-26 11:35:54.469322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.058 qpair failed and we were unable to recover it.
00:27:59.058 [2024-07-26 11:35:54.479335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.479401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.479418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.479425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.479431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.479444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.489281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.489335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.489350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.489356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.489363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.489377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.499331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.499393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.499407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.499414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.499420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.499434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.509348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.509405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.509420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.509428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.509434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.509447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.519339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.519400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.519414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.519421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.519431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.519445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.529381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.529438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.529453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.529460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.529466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.529481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.539358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.539457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.539472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.539479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.539485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.539499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.549409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.549465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.549479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.549486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.549492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.549507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.559449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.559506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.559521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.559527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.559533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.559547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.569482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.569551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.569565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.569572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.569578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.569592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.579499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.579569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.579584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.579591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.579596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.579611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.589524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.589585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.589599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.589606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.589612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.589629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.599582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.058 [2024-07-26 11:35:54.599648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.058 [2024-07-26 11:35:54.599663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.058 [2024-07-26 11:35:54.599670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.058 [2024-07-26 11:35:54.599675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.058 [2024-07-26 11:35:54.599689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.058 qpair failed and we were unable to recover it. 
00:27:59.058 [2024-07-26 11:35:54.609556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.609611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.609624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.609636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.609645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.609660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.619610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.619685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.619699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.619706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.619712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.619726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.629643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.629704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.629718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.629725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.629731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.629746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.639667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.639726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.639741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.639749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.639757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.639771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.649615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.649674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.649689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.649696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.649702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.649716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.659653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.659709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.659723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.659730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.659736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.659750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.669742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.669802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.669816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.669823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.669829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.669843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.679779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.679861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.679876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.679883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.679889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.679903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.689739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.689803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.689817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.689824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.689830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.689844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.699833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.699890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.699905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.699917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.699924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.699938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-07-26 11:35:54.709852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.059 [2024-07-26 11:35:54.709909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.059 [2024-07-26 11:35:54.709923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.059 [2024-07-26 11:35:54.709930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.059 [2024-07-26 11:35:54.709936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.059 [2024-07-26 11:35:54.709950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.319 [2024-07-26 11:35:54.719818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.319 [2024-07-26 11:35:54.719887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.319 [2024-07-26 11:35:54.719903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.319 [2024-07-26 11:35:54.719911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.319 [2024-07-26 11:35:54.719917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.319 [2024-07-26 11:35:54.719931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.319 qpair failed and we were unable to recover it. 
00:27:59.319 [2024-07-26 11:35:54.729832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.319 [2024-07-26 11:35:54.729903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.319 [2024-07-26 11:35:54.729917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.319 [2024-07-26 11:35:54.729924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.319 [2024-07-26 11:35:54.729930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.319 [2024-07-26 11:35:54.729944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.319 qpair failed and we were unable to recover it. 
00:27:59.319 [2024-07-26 11:35:54.739955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.319 [2024-07-26 11:35:54.740010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.319 [2024-07-26 11:35:54.740024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.319 [2024-07-26 11:35:54.740031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.319 [2024-07-26 11:35:54.740037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.319 [2024-07-26 11:35:54.740050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.319 qpair failed and we were unable to recover it. 
00:27:59.319 [2024-07-26 11:35:54.750049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.319 [2024-07-26 11:35:54.750134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.319 [2024-07-26 11:35:54.750148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.319 [2024-07-26 11:35:54.750155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.319 [2024-07-26 11:35:54.750161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.319 [2024-07-26 11:35:54.750176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.319 qpair failed and we were unable to recover it. 
00:27:59.319 [2024-07-26 11:35:54.759945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.319 [2024-07-26 11:35:54.760038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.319 [2024-07-26 11:35:54.760053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.319 [2024-07-26 11:35:54.760061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.319 [2024-07-26 11:35:54.760067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.319 [2024-07-26 11:35:54.760081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.319 qpair failed and we were unable to recover it. 
00:27:59.319 [2024-07-26 11:35:54.769975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.319 [2024-07-26 11:35:54.770032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.319 [2024-07-26 11:35:54.770046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.319 [2024-07-26 11:35:54.770053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.319 [2024-07-26 11:35:54.770060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.319 [2024-07-26 11:35:54.770073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.319 qpair failed and we were unable to recover it. 
00:27:59.319 [2024-07-26 11:35:54.780106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.319 [2024-07-26 11:35:54.780163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.319 [2024-07-26 11:35:54.780178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.319 [2024-07-26 11:35:54.780184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.319 [2024-07-26 11:35:54.780191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.319 [2024-07-26 11:35:54.780205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.319 qpair failed and we were unable to recover it.
00:27:59.319 [2024-07-26 11:35:54.790025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.319 [2024-07-26 11:35:54.790084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.319 [2024-07-26 11:35:54.790101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.319 [2024-07-26 11:35:54.790108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.319 [2024-07-26 11:35:54.790115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.319 [2024-07-26 11:35:54.790128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.319 qpair failed and we were unable to recover it.
00:27:59.319 [2024-07-26 11:35:54.800065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.319 [2024-07-26 11:35:54.800120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.319 [2024-07-26 11:35:54.800134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.319 [2024-07-26 11:35:54.800141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.319 [2024-07-26 11:35:54.800147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.319 [2024-07-26 11:35:54.800161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.319 qpair failed and we were unable to recover it.
00:27:59.319 [2024-07-26 11:35:54.810151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.319 [2024-07-26 11:35:54.810208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.319 [2024-07-26 11:35:54.810222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.319 [2024-07-26 11:35:54.810228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.319 [2024-07-26 11:35:54.810234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.319 [2024-07-26 11:35:54.810248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.319 qpair failed and we were unable to recover it.
00:27:59.319 [2024-07-26 11:35:54.820200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.319 [2024-07-26 11:35:54.820258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.319 [2024-07-26 11:35:54.820272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.319 [2024-07-26 11:35:54.820279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.319 [2024-07-26 11:35:54.820284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.319 [2024-07-26 11:35:54.820298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.319 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.830210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.830272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.830286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.830292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.830298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.830315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.840237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.840290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.840304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.840311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.840316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.840331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.850243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.850295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.850309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.850316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.850322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.850336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.860286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.860343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.860357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.860364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.860371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.860385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.870338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.870394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.870409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.870416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.870423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.870436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.880373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.880428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.880445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.880452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.880458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.880472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.890375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.890428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.890442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.890449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.890456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.890470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.900404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.900456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.900470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.900477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.900483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.900497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.910414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.910468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.910483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.910489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.910495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.910510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.920460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.920517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.920531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.920538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.920545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.920562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.930488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.930539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.930554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.930561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.930567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.930581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.940528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.940593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.940607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.940615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.940620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.940638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.950551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.320 [2024-07-26 11:35:54.950613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.320 [2024-07-26 11:35:54.950631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.320 [2024-07-26 11:35:54.950639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.320 [2024-07-26 11:35:54.950645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.320 [2024-07-26 11:35:54.950659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.320 qpair failed and we were unable to recover it.
00:27:59.320 [2024-07-26 11:35:54.960561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.321 [2024-07-26 11:35:54.960612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.321 [2024-07-26 11:35:54.960631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.321 [2024-07-26 11:35:54.960638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.321 [2024-07-26 11:35:54.960645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.321 [2024-07-26 11:35:54.960659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.321 qpair failed and we were unable to recover it.
00:27:59.321 [2024-07-26 11:35:54.970592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.321 [2024-07-26 11:35:54.970654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.321 [2024-07-26 11:35:54.970669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.321 [2024-07-26 11:35:54.970676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.321 [2024-07-26 11:35:54.970682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.321 [2024-07-26 11:35:54.970696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.321 qpair failed and we were unable to recover it.
00:27:59.579 [2024-07-26 11:35:54.980655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.579 [2024-07-26 11:35:54.980711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.579 [2024-07-26 11:35:54.980725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.579 [2024-07-26 11:35:54.980732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.579 [2024-07-26 11:35:54.980738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.579 [2024-07-26 11:35:54.980752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.579 qpair failed and we were unable to recover it.
00:27:59.579 [2024-07-26 11:35:54.990614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.579 [2024-07-26 11:35:54.990709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.579 [2024-07-26 11:35:54.990724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.579 [2024-07-26 11:35:54.990730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.579 [2024-07-26 11:35:54.990736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.579 [2024-07-26 11:35:54.990751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.579 qpair failed and we were unable to recover it.
00:27:59.579 [2024-07-26 11:35:55.000706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.579 [2024-07-26 11:35:55.000775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.579 [2024-07-26 11:35:55.000789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.579 [2024-07-26 11:35:55.000796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.579 [2024-07-26 11:35:55.000802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.579 [2024-07-26 11:35:55.000815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.579 qpair failed and we were unable to recover it.
00:27:59.579 [2024-07-26 11:35:55.010706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.579 [2024-07-26 11:35:55.010761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.579 [2024-07-26 11:35:55.010776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.579 [2024-07-26 11:35:55.010783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.010792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.010807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.020770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.020823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.020837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.020844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.020851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.020866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.030818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.030875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.030889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.030896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.030902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.030916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.040849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.040900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.040915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.040921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.040927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.040942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.050768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.050822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.050836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.050843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.050850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.050863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.060807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.060861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.060875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.060882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.060889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.060902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.070823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.070881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.070895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.070902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.070908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.070922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.080901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.080984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.080999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.081006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.081012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.081026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.090942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.090995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.091010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.091016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.091023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.091036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.100918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.100972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.100987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.100998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.101004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.101017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.111022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.111123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.111137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.111144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.111150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.111164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.121038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.121095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.121109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.121115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.121122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.121136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.131064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.580 [2024-07-26 11:35:55.131120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.580 [2024-07-26 11:35:55.131134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.580 [2024-07-26 11:35:55.131142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.580 [2024-07-26 11:35:55.131148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90
00:27:59.580 [2024-07-26 11:35:55.131161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.580 qpair failed and we were unable to recover it.
00:27:59.580 [2024-07-26 11:35:55.141096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.580 [2024-07-26 11:35:55.141150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.580 [2024-07-26 11:35:55.141164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.580 [2024-07-26 11:35:55.141171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.580 [2024-07-26 11:35:55.141177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.580 [2024-07-26 11:35:55.141192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.151124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.151185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.151199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.151206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.151213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.151227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.161180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.161245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.161259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.161266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.161272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.161286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.171248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.171326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.171341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.171348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.171353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.171367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.181241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.181296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.181309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.181316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.181323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.181337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.191244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.191302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.191316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.191327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.191333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.191346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.201336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.201398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.201412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.201419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.201425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.201439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.211304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.211357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.211372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.211379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.211386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.211401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.221354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.221429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.221444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.221451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.221457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.221470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-26 11:35:55.231341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.581 [2024-07-26 11:35:55.231401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.581 [2024-07-26 11:35:55.231414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.581 [2024-07-26 11:35:55.231422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.581 [2024-07-26 11:35:55.231428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.581 [2024-07-26 11:35:55.231441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.838 [2024-07-26 11:35:55.241390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.241449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.241463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.241470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.241476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.839 [2024-07-26 11:35:55.241490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.251407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.251456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.251471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.251478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.251485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:27:59.839 [2024-07-26 11:35:55.251499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.261518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.261671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.261725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.261749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.261769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.261817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.271497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.271573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.271601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.271616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.271637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.271666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.281496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.281556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.281580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.281590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.281599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.281618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.291527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.291578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.291593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.291600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.291606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.291620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.301564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.301621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.301641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.301648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.301654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.301669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.311577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.311636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.311652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.311659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.311665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.311679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.321652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.321708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.321725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.321733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.321739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.321759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.331661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.331719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.331735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.331742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.331748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.331762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.341680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.341733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.341748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.341755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.341761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.341775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.351689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.351759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.839 [2024-07-26 11:35:55.351775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.839 [2024-07-26 11:35:55.351783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.839 [2024-07-26 11:35:55.351788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.839 [2024-07-26 11:35:55.351802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.839 qpair failed and we were unable to recover it. 
00:27:59.839 [2024-07-26 11:35:55.361710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.839 [2024-07-26 11:35:55.361764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.361779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.361786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.361792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.361807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.371830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.371913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.371931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.371939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.371945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.371959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.381707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.381802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.381816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.381823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.381829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.381842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.391873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.391957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.391972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.391979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.391984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.391998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.401886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.401952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.401967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.401974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.401980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.401994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.411865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.411919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.411934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.411941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.411947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.411964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.421964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.422022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.422038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.422045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.422052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.422066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.431851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.431906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.431920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.431927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.431933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.431947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.441888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.441944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.441959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.441966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.441973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.441986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.451991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.452041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.452056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.452063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.452069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.452082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.462020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.462076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.462096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.462104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.462109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.462124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.472037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.472095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.472109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.472117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.472123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.472137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.482109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.482165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.482179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.840 [2024-07-26 11:35:55.482187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.840 [2024-07-26 11:35:55.482194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.840 [2024-07-26 11:35:55.482210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.840 qpair failed and we were unable to recover it. 
00:27:59.840 [2024-07-26 11:35:55.492093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.840 [2024-07-26 11:35:55.492153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.840 [2024-07-26 11:35:55.492168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.841 [2024-07-26 11:35:55.492175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.841 [2024-07-26 11:35:55.492181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:27:59.841 [2024-07-26 11:35:55.492195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.841 qpair failed and we were unable to recover it. 
00:28:00.096 [2024-07-26 11:35:55.502197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.096 [2024-07-26 11:35:55.502300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.096 [2024-07-26 11:35:55.502319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.096 [2024-07-26 11:35:55.502327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.096 [2024-07-26 11:35:55.502337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.096 [2024-07-26 11:35:55.502354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.512151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.512211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.512227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.512236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.512242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.512257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.522266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.522323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.522338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.522345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.522351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.522365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.532211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.532265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.532279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.532287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.532293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.532307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.542317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.542423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.542438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.542444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.542451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.542464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.552302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.552363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.552378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.552385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.552391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.552404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.562294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.562348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.562364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.562371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.562377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.562390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.572349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.572404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.572419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.572426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.572432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.572445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.582384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.582449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.582464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.582471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.582477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.582491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.592375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.592432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.592446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.592453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.097 [2024-07-26 11:35:55.592463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.097 [2024-07-26 11:35:55.592477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.097 qpair failed and we were unable to recover it. 
00:28:00.097 [2024-07-26 11:35:55.602405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.097 [2024-07-26 11:35:55.602463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.097 [2024-07-26 11:35:55.602477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.097 [2024-07-26 11:35:55.602484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.602491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.602504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.612425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.612485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.612500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.612507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.612513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.612527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.622469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.622541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.622556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.622563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.622570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.622585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.632495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.632551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.632566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.632574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.632580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.632594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.642508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.642567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.642582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.642590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.642596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.642610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.652542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.652593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.652607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.652614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.652621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.652639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.662591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.662650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.662664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.662671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.662677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.662691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.672607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.672667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.672682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.672689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.672695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.672709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.682678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.682738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.682752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.682762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.682769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.682782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.692654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.692704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.692719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.098 [2024-07-26 11:35:55.692726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.098 [2024-07-26 11:35:55.692732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.098 [2024-07-26 11:35:55.692746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.098 qpair failed and we were unable to recover it. 
00:28:00.098 [2024-07-26 11:35:55.702693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.098 [2024-07-26 11:35:55.702748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.098 [2024-07-26 11:35:55.702764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.099 [2024-07-26 11:35:55.702772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.099 [2024-07-26 11:35:55.702779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.099 [2024-07-26 11:35:55.702793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.099 qpair failed and we were unable to recover it. 
00:28:00.099 [2024-07-26 11:35:55.712716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.099 [2024-07-26 11:35:55.712768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.099 [2024-07-26 11:35:55.712782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.099 [2024-07-26 11:35:55.712789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.099 [2024-07-26 11:35:55.712795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.099 [2024-07-26 11:35:55.712809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.099 qpair failed and we were unable to recover it. 
00:28:00.099 [2024-07-26 11:35:55.722736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.099 [2024-07-26 11:35:55.722831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.099 [2024-07-26 11:35:55.722845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.099 [2024-07-26 11:35:55.722852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.099 [2024-07-26 11:35:55.722858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.099 [2024-07-26 11:35:55.722872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.099 qpair failed and we were unable to recover it. 
00:28:00.099 [2024-07-26 11:35:55.732765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.099 [2024-07-26 11:35:55.732821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.099 [2024-07-26 11:35:55.732836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.099 [2024-07-26 11:35:55.732843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.099 [2024-07-26 11:35:55.732849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.099 [2024-07-26 11:35:55.732863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.099 qpair failed and we were unable to recover it. 
00:28:00.099 [2024-07-26 11:35:55.742809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.099 [2024-07-26 11:35:55.742909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.099 [2024-07-26 11:35:55.742924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.099 [2024-07-26 11:35:55.742930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.099 [2024-07-26 11:35:55.742936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.099 [2024-07-26 11:35:55.742949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.099 qpair failed and we were unable to recover it.
00:28:00.099 [2024-07-26 11:35:55.752807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.099 [2024-07-26 11:35:55.752912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.099 [2024-07-26 11:35:55.752926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.099 [2024-07-26 11:35:55.752933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.099 [2024-07-26 11:35:55.752939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.099 [2024-07-26 11:35:55.752953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.099 qpair failed and we were unable to recover it.
00:28:00.356 [2024-07-26 11:35:55.762880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.356 [2024-07-26 11:35:55.762940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.356 [2024-07-26 11:35:55.762959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.356 [2024-07-26 11:35:55.762966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.356 [2024-07-26 11:35:55.762972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.356 [2024-07-26 11:35:55.762989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.356 qpair failed and we were unable to recover it.
00:28:00.356 [2024-07-26 11:35:55.772859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.356 [2024-07-26 11:35:55.772925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.772942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.772952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.772958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.772972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.782909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.782964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.782979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.782986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.782993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.783007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.792982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.793038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.793053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.793060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.793066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.793080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.802961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.803014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.803030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.803037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.803043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.803057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.812988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.813043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.813057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.813064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.813071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.813084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.823027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.823097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.823113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.823120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.823126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.823140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.833047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.833105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.833119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.833126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.833132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.833145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.843081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.843134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.843148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.843155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.843161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.843174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.853086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.853182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.853196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.853203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.853209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.853223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.863171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.863273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.863288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.863298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.863304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.863318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.873152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.873230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.873244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.873251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.873257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.873270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.883177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.883250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.883265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.883272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.883278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.883291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.893235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.893285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.893300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.357 [2024-07-26 11:35:55.893307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.357 [2024-07-26 11:35:55.893313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.357 [2024-07-26 11:35:55.893328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.357 qpair failed and we were unable to recover it.
00:28:00.357 [2024-07-26 11:35:55.903285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.357 [2024-07-26 11:35:55.903363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.357 [2024-07-26 11:35:55.903378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.903385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.903391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.903405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.913260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.913315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.913330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.913337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.913343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.913358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.923294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.923347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.923362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.923368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.923374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.923388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.933324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.933442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.933457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.933464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.933470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.933483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.943371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.943454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.943469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.943475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.943481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.943495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.953387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.953441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.953459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.953465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.953472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.953486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.963395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.963451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.963467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.963474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.963480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.963494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.973431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.973485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.973500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.973507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.973513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.973526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.983505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.983605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.983620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.983631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.983637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.983651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:55.993513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:55.993570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:55.993584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:55.993591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:55.993597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:55.993611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:56.003448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:56.003511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:56.003525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:56.003533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:56.003539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:56.003552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.358 [2024-07-26 11:35:56.013609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.358 [2024-07-26 11:35:56.013734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.358 [2024-07-26 11:35:56.013758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.358 [2024-07-26 11:35:56.013769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.358 [2024-07-26 11:35:56.013776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.358 [2024-07-26 11:35:56.013794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.358 qpair failed and we were unable to recover it.
00:28:00.616 [2024-07-26 11:35:56.023583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.616 [2024-07-26 11:35:56.023643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.616 [2024-07-26 11:35:56.023662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.616 [2024-07-26 11:35:56.023670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.616 [2024-07-26 11:35:56.023676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.616 [2024-07-26 11:35:56.023693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.616 qpair failed and we were unable to recover it.
00:28:00.616 [2024-07-26 11:35:56.033663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.616 [2024-07-26 11:35:56.033764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.617 [2024-07-26 11:35:56.033780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.617 [2024-07-26 11:35:56.033787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.617 [2024-07-26 11:35:56.033793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.617 [2024-07-26 11:35:56.033807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.617 qpair failed and we were unable to recover it.
00:28:00.617 [2024-07-26 11:35:56.043688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.617 [2024-07-26 11:35:56.043742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.617 [2024-07-26 11:35:56.043760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.617 [2024-07-26 11:35:56.043767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.617 [2024-07-26 11:35:56.043773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.617 [2024-07-26 11:35:56.043787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.617 qpair failed and we were unable to recover it.
00:28:00.617 [2024-07-26 11:35:56.053665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.617 [2024-07-26 11:35:56.053717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.617 [2024-07-26 11:35:56.053732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.617 [2024-07-26 11:35:56.053739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.617 [2024-07-26 11:35:56.053745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.617 [2024-07-26 11:35:56.053758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.617 qpair failed and we were unable to recover it.
00:28:00.617 [2024-07-26 11:35:56.063705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.617 [2024-07-26 11:35:56.063761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.617 [2024-07-26 11:35:56.063776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.617 [2024-07-26 11:35:56.063783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.617 [2024-07-26 11:35:56.063789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.617 [2024-07-26 11:35:56.063802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.617 qpair failed and we were unable to recover it.
00:28:00.617 [2024-07-26 11:35:56.073753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.617 [2024-07-26 11:35:56.073860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.617 [2024-07-26 11:35:56.073875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.617 [2024-07-26 11:35:56.073882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.617 [2024-07-26 11:35:56.073888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.617 [2024-07-26 11:35:56.073902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.617 qpair failed and we were unable to recover it.
00:28:00.617 [2024-07-26 11:35:56.083683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.617 [2024-07-26 11:35:56.083741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.617 [2024-07-26 11:35:56.083756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.617 [2024-07-26 11:35:56.083764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.617 [2024-07-26 11:35:56.083770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.617 [2024-07-26 11:35:56.083787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.617 qpair failed and we were unable to recover it.
00:28:00.617 [2024-07-26 11:35:56.093764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.617 [2024-07-26 11:35:56.093819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.617 [2024-07-26 11:35:56.093833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.617 [2024-07-26 11:35:56.093840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.617 [2024-07-26 11:35:56.093846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:00.617 [2024-07-26 11:35:56.093860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:00.617 qpair failed and we were unable to recover it.
00:28:00.617 [2024-07-26 11:35:56.103810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.617 [2024-07-26 11:35:56.103866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.617 [2024-07-26 11:35:56.103881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.617 [2024-07-26 11:35:56.103888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.617 [2024-07-26 11:35:56.103893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.617 [2024-07-26 11:35:56.103907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.617 qpair failed and we were unable to recover it. 
00:28:00.617 [2024-07-26 11:35:56.113826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.617 [2024-07-26 11:35:56.113889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.617 [2024-07-26 11:35:56.113903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.617 [2024-07-26 11:35:56.113910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.617 [2024-07-26 11:35:56.113916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.617 [2024-07-26 11:35:56.113929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.617 qpair failed and we were unable to recover it. 
00:28:00.617 [2024-07-26 11:35:56.123903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.617 [2024-07-26 11:35:56.123960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.617 [2024-07-26 11:35:56.123975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.617 [2024-07-26 11:35:56.123983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.617 [2024-07-26 11:35:56.123989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.617 [2024-07-26 11:35:56.124002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.617 qpair failed and we were unable to recover it. 
00:28:00.617 [2024-07-26 11:35:56.133898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.617 [2024-07-26 11:35:56.133994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.617 [2024-07-26 11:35:56.134012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.617 [2024-07-26 11:35:56.134019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.617 [2024-07-26 11:35:56.134025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.617 [2024-07-26 11:35:56.134038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.617 qpair failed and we were unable to recover it. 
00:28:00.617 [2024-07-26 11:35:56.143846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.617 [2024-07-26 11:35:56.143903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.617 [2024-07-26 11:35:56.143918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.617 [2024-07-26 11:35:56.143926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.617 [2024-07-26 11:35:56.143932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.617 [2024-07-26 11:35:56.143946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.617 qpair failed and we were unable to recover it. 
00:28:00.617 [2024-07-26 11:35:56.153932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.617 [2024-07-26 11:35:56.154000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.617 [2024-07-26 11:35:56.154015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.617 [2024-07-26 11:35:56.154022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.617 [2024-07-26 11:35:56.154028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.617 [2024-07-26 11:35:56.154042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.617 qpair failed and we were unable to recover it. 
00:28:00.617 [2024-07-26 11:35:56.163963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.617 [2024-07-26 11:35:56.164018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.617 [2024-07-26 11:35:56.164033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.617 [2024-07-26 11:35:56.164042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.164048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.164061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.173976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.174033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.174047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.174054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.174060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.174078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.184029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.184096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.184111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.184118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.184124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.184137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.194094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.194175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.194190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.194197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.194203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.194217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.204114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.204172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.204187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.204194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.204200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.204214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.214113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.214172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.214187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.214195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.214201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.214214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.224132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.224184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.224205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.224212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.224218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.224232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.234101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.234159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.234174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.234181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.234187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.234201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.244140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.244233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.244248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.244255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.244261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.244274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.254294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.254391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.254406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.254413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.254419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.254433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.264244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.264294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.264309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.264316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.264325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.264339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.618 [2024-07-26 11:35:56.274212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.618 [2024-07-26 11:35:56.274266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.618 [2024-07-26 11:35:56.274285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.618 [2024-07-26 11:35:56.274293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.618 [2024-07-26 11:35:56.274299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.618 [2024-07-26 11:35:56.274315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.618 qpair failed and we were unable to recover it. 
00:28:00.875 [2024-07-26 11:35:56.284252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.875 [2024-07-26 11:35:56.284309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.875 [2024-07-26 11:35:56.284327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.284336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.284342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.284358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.294340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.294404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.294420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.294428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.294435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.294449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.304426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.304534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.304549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.304557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.304563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.304577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.314407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.314483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.314500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.314508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.314514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.314528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.324436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.324493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.324509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.324515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.324521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.324536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.334480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.334562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.334578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.334585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.334592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.334607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.344477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.344533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.344548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.344555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.344561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.344575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.354509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.354565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.354579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.354586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.354596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.354610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.364531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.364595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.364610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.364618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.364624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.364642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.374501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.374553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.374568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.374575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.374581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.374595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.384601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.384662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.384676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.384683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.384689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.384704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.394641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.394715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.394730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.394737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.394743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.394757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.404696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.404758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.404773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.404781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.404787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.404801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.876 qpair failed and we were unable to recover it. 
00:28:00.876 [2024-07-26 11:35:56.414631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.876 [2024-07-26 11:35:56.414698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.876 [2024-07-26 11:35:56.414713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.876 [2024-07-26 11:35:56.414720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.876 [2024-07-26 11:35:56.414726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.876 [2024-07-26 11:35:56.414740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.424667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.424729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.424743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.424750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.424756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.424770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.434676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.434738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.434755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.434763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.434768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.434782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.444719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.444807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.444822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.444829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.444839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.444853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.454788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.454841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.454856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.454863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.454870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.454883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.464850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.464909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.464924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.464931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.464937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.464950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.474866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.474975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.474989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.474996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.475002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.475016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.484903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.484979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.484994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.485001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.485007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.485021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.494925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.494984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.494999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.495005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.495011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.495025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.504902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.504991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.505006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.505013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.505018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.505032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.515010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.515062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.515077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.515084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.515090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.515103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:00.877 [2024-07-26 11:35:56.525002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.877 [2024-07-26 11:35:56.525053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.877 [2024-07-26 11:35:56.525068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.877 [2024-07-26 11:35:56.525075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.877 [2024-07-26 11:35:56.525081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:00.877 [2024-07-26 11:35:56.525095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.877 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.535031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.535090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.535110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.535122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.535128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.535145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.545071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.545127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.545144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.545152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.545159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.545174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.555083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.555138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.555153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.555160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.555166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.555180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.565126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.565191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.565206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.565213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.565219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.565233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.575138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.575245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.575261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.575268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.575274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.575288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.585180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.585233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.585248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.585255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.585261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.585275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.595185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.595240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.595255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.595262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.595269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.595282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.605242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.605309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.605324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.605331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.605338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.605351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.615257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.615310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.615325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.615332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.615338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.615352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.625325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.625382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.625398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.625410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.625416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.625430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.635282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.635341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.635356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.635362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.635368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.635382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.645345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.645402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.645417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.645425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.645431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.645444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.655356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.655408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.655423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.655430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.655436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.655450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.665402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.665457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.665472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.665480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.665486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.665500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.675435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.675493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.675508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.675516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.675523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.675548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.685433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.685498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.685513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.685520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.685527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.685540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.695427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.695524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.695538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.695545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.695551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.695564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.705511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.705568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.705583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.705589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.705596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.705609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.715524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.715583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.715601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.715609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.715615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.715633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.725593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.136 [2024-07-26 11:35:56.725703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.136 [2024-07-26 11:35:56.725718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.136 [2024-07-26 11:35:56.725725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.136 [2024-07-26 11:35:56.725731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.136 [2024-07-26 11:35:56.725745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.136 qpair failed and we were unable to recover it. 
00:28:01.136 [2024-07-26 11:35:56.735590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.137 [2024-07-26 11:35:56.735669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.137 [2024-07-26 11:35:56.735684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.137 [2024-07-26 11:35:56.735691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.137 [2024-07-26 11:35:56.735697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.137 [2024-07-26 11:35:56.735715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.137 qpair failed and we were unable to recover it. 
00:28:01.137 [2024-07-26 11:35:56.745624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.137 [2024-07-26 11:35:56.745686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.137 [2024-07-26 11:35:56.745701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.137 [2024-07-26 11:35:56.745708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.137 [2024-07-26 11:35:56.745714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.137 [2024-07-26 11:35:56.745728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.137 qpair failed and we were unable to recover it. 
00:28:01.137 [2024-07-26 11:35:56.755659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.137 [2024-07-26 11:35:56.755713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.137 [2024-07-26 11:35:56.755727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.137 [2024-07-26 11:35:56.755734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.137 [2024-07-26 11:35:56.755742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.137 [2024-07-26 11:35:56.755755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.137 qpair failed and we were unable to recover it. 
00:28:01.137 [2024-07-26 11:35:56.765702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.137 [2024-07-26 11:35:56.765761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.137 [2024-07-26 11:35:56.765776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.137 [2024-07-26 11:35:56.765784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.137 [2024-07-26 11:35:56.765790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.137 [2024-07-26 11:35:56.765803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.137 qpair failed and we were unable to recover it. 
00:28:01.137 [2024-07-26 11:35:56.775755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.137 [2024-07-26 11:35:56.775809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.137 [2024-07-26 11:35:56.775823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.137 [2024-07-26 11:35:56.775830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.137 [2024-07-26 11:35:56.775836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.137 [2024-07-26 11:35:56.775850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.137 qpair failed and we were unable to recover it. 
00:28:01.137 [2024-07-26 11:35:56.785802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.137 [2024-07-26 11:35:56.785881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.137 [2024-07-26 11:35:56.785895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.137 [2024-07-26 11:35:56.785903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.137 [2024-07-26 11:35:56.785909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.137 [2024-07-26 11:35:56.785922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.137 qpair failed and we were unable to recover it. 
00:28:01.395 [2024-07-26 11:35:56.795835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.395 [2024-07-26 11:35:56.795888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.395 [2024-07-26 11:35:56.795907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.395 [2024-07-26 11:35:56.795915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.395 [2024-07-26 11:35:56.795922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.395 [2024-07-26 11:35:56.795938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.395 qpair failed and we were unable to recover it. 
00:28:01.395 [2024-07-26 11:35:56.805798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.395 [2024-07-26 11:35:56.805855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.395 [2024-07-26 11:35:56.805876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.395 [2024-07-26 11:35:56.805884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.395 [2024-07-26 11:35:56.805890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.395 [2024-07-26 11:35:56.805906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.395 qpair failed and we were unable to recover it. 
00:28:01.395 [2024-07-26 11:35:56.815863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.395 [2024-07-26 11:35:56.815966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.395 [2024-07-26 11:35:56.815982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.395 [2024-07-26 11:35:56.815989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.395 [2024-07-26 11:35:56.815995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.395 [2024-07-26 11:35:56.816010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.395 qpair failed and we were unable to recover it. 
00:28:01.395 [2024-07-26 11:35:56.825866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.395 [2024-07-26 11:35:56.825969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.395 [2024-07-26 11:35:56.825985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.395 [2024-07-26 11:35:56.825991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.395 [2024-07-26 11:35:56.825997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.395 [2024-07-26 11:35:56.826011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.395 qpair failed and we were unable to recover it. 
00:28:01.395 [2024-07-26 11:35:56.835883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.395 [2024-07-26 11:35:56.835934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.835949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.835956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.835963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.835977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.845949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.846002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.846017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.846024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.846030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.846047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.855942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.855998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.856014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.856021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.856027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.856041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.865972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.866058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.866073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.866080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.866086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.866100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.876001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.876058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.876073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.876079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.876085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.876099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.886028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.886083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.886099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.886105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.886111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.886124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.896119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.896180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.896198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.896205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.896211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.896225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.906100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.906157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.906172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.906179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.906185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.906198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.916114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.916174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.916188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.916195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.916201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.916215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.926144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.926198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.926213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.926221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.926227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.926241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.936176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.936226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.936240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.936247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.936253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.936271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.946218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.946318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.946332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.946339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.946345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.946358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.956244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.956298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.396 [2024-07-26 11:35:56.956313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.396 [2024-07-26 11:35:56.956320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.396 [2024-07-26 11:35:56.956326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.396 [2024-07-26 11:35:56.956340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.396 qpair failed and we were unable to recover it. 
00:28:01.396 [2024-07-26 11:35:56.966293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.396 [2024-07-26 11:35:56.966348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.397 [2024-07-26 11:35:56.966363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.397 [2024-07-26 11:35:56.966371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.397 [2024-07-26 11:35:56.966377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.397 [2024-07-26 11:35:56.966390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.397 qpair failed and we were unable to recover it. 
00:28:01.397 [2024-07-26 11:35:56.976293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.397 [2024-07-26 11:35:56.976346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.397 [2024-07-26 11:35:56.976361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.397 [2024-07-26 11:35:56.976368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.397 [2024-07-26 11:35:56.976375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.397 [2024-07-26 11:35:56.976388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.397 qpair failed and we were unable to recover it. 
00:28:01.397 [2024-07-26 11:35:56.986335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.397 [2024-07-26 11:35:56.986386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.397 [2024-07-26 11:35:56.986405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.397 [2024-07-26 11:35:56.986414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.397 [2024-07-26 11:35:56.986420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.397 [2024-07-26 11:35:56.986434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.397 qpair failed and we were unable to recover it. 
00:28:01.397 [2024-07-26 11:35:56.996372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.397 [2024-07-26 11:35:56.996430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.397 [2024-07-26 11:35:56.996445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.397 [2024-07-26 11:35:56.996453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.397 [2024-07-26 11:35:56.996459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.397 [2024-07-26 11:35:56.996473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.397 qpair failed and we were unable to recover it. 
00:28:01.397 [2024-07-26 11:35:57.006390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.397 [2024-07-26 11:35:57.006445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.397 [2024-07-26 11:35:57.006459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.397 [2024-07-26 11:35:57.006466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.397 [2024-07-26 11:35:57.006473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.397 [2024-07-26 11:35:57.006486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.397 qpair failed and we were unable to recover it. 
00:28:01.397 [2024-07-26 11:35:57.016425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.397 [2024-07-26 11:35:57.016486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.397 [2024-07-26 11:35:57.016501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.397 [2024-07-26 11:35:57.016508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.397 [2024-07-26 11:35:57.016514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.397 [2024-07-26 11:35:57.016528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.397 qpair failed and we were unable to recover it. 
00:28:01.397 [2024-07-26 11:35:57.026486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.397 [2024-07-26 11:35:57.026544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.397 [2024-07-26 11:35:57.026559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.397 [2024-07-26 11:35:57.026566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.397 [2024-07-26 11:35:57.026575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.397 [2024-07-26 11:35:57.026589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.397 qpair failed and we were unable to recover it.
00:28:01.397 [2024-07-26 11:35:57.036525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.397 [2024-07-26 11:35:57.036612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.397 [2024-07-26 11:35:57.036631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.397 [2024-07-26 11:35:57.036638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.397 [2024-07-26 11:35:57.036644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.397 [2024-07-26 11:35:57.036658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.397 qpair failed and we were unable to recover it.
00:28:01.397 [2024-07-26 11:35:57.046493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.397 [2024-07-26 11:35:57.046546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.397 [2024-07-26 11:35:57.046562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.397 [2024-07-26 11:35:57.046569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.397 [2024-07-26 11:35:57.046575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.397 [2024-07-26 11:35:57.046588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.397 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.056524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.056622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.056645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.056653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.056659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.056675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.066616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.066724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.066742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.066749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.066755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.066772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.076584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.076646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.076661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.076669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.076675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.076689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.086617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.086673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.086688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.086696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.086702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.086716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.096648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.096702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.096717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.096724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.096731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.096744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.106683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.106748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.106763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.106770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.106776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.106790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.116694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.116752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.116766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.116774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.116783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.116797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.126667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.126760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.126775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.126782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.126788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.126802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.136755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.136811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.136825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.136832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.136839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.136852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.146719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.146771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.146786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.146792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.146799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.146812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.156814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.156870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.156885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.156892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.156898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.156912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.166854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.166913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.166927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.166934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.166941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.166953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.176808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.176863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.176878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.176885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.176892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.176906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.186960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.187065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.187080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.187087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.187093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.187107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.196937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.196994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.197009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.197016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.197022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.197036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.206987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.207051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.207066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.207073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.207082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.207096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.217032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.217130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.217145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.217152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.217158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.217171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.227019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.227075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.227089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.227096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.227102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.227116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.655 [2024-07-26 11:35:57.237100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.655 [2024-07-26 11:35:57.237158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.655 [2024-07-26 11:35:57.237174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.655 [2024-07-26 11:35:57.237181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.655 [2024-07-26 11:35:57.237188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.655 [2024-07-26 11:35:57.237202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.655 qpair failed and we were unable to recover it.
00:28:01.656 [2024-07-26 11:35:57.247082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.656 [2024-07-26 11:35:57.247135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.656 [2024-07-26 11:35:57.247150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.656 [2024-07-26 11:35:57.247157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.656 [2024-07-26 11:35:57.247163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.656 [2024-07-26 11:35:57.247177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.656 qpair failed and we were unable to recover it.
00:28:01.656 [2024-07-26 11:35:57.257139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.656 [2024-07-26 11:35:57.257195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.656 [2024-07-26 11:35:57.257209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.656 [2024-07-26 11:35:57.257216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.656 [2024-07-26 11:35:57.257222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.656 [2024-07-26 11:35:57.257236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.656 qpair failed and we were unable to recover it.
00:28:01.656 [2024-07-26 11:35:57.267138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.656 [2024-07-26 11:35:57.267191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.656 [2024-07-26 11:35:57.267206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.656 [2024-07-26 11:35:57.267213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.656 [2024-07-26 11:35:57.267219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.656 [2024-07-26 11:35:57.267232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.656 qpair failed and we were unable to recover it.
00:28:01.656 [2024-07-26 11:35:57.277101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.656 [2024-07-26 11:35:57.277162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.656 [2024-07-26 11:35:57.277176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.656 [2024-07-26 11:35:57.277183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.656 [2024-07-26 11:35:57.277189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.656 [2024-07-26 11:35:57.277203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.656 qpair failed and we were unable to recover it.
00:28:01.656 [2024-07-26 11:35:57.287199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.656 [2024-07-26 11:35:57.287261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.656 [2024-07-26 11:35:57.287275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.656 [2024-07-26 11:35:57.287282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.656 [2024-07-26 11:35:57.287288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.656 [2024-07-26 11:35:57.287302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.656 qpair failed and we were unable to recover it.
00:28:01.656 [2024-07-26 11:35:57.297227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.656 [2024-07-26 11:35:57.297301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.656 [2024-07-26 11:35:57.297317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.656 [2024-07-26 11:35:57.297327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.656 [2024-07-26 11:35:57.297333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.656 [2024-07-26 11:35:57.297346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.656 qpair failed and we were unable to recover it.
00:28:01.656 [2024-07-26 11:35:57.307295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.656 [2024-07-26 11:35:57.307353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.656 [2024-07-26 11:35:57.307368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.656 [2024-07-26 11:35:57.307375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.656 [2024-07-26 11:35:57.307381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.656 [2024-07-26 11:35:57.307395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.656 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.317308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.912 [2024-07-26 11:35:57.317369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.912 [2024-07-26 11:35:57.317389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.912 [2024-07-26 11:35:57.317397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.912 [2024-07-26 11:35:57.317403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.912 [2024-07-26 11:35:57.317420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.912 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.327318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.912 [2024-07-26 11:35:57.327375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.912 [2024-07-26 11:35:57.327392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.912 [2024-07-26 11:35:57.327400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.912 [2024-07-26 11:35:57.327406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.912 [2024-07-26 11:35:57.327420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.912 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.337341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.912 [2024-07-26 11:35:57.337397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.912 [2024-07-26 11:35:57.337413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.912 [2024-07-26 11:35:57.337421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.912 [2024-07-26 11:35:57.337427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.912 [2024-07-26 11:35:57.337441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.912 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.347411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.912 [2024-07-26 11:35:57.347464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.912 [2024-07-26 11:35:57.347479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.912 [2024-07-26 11:35:57.347486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.912 [2024-07-26 11:35:57.347492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.912 [2024-07-26 11:35:57.347506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.912 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.357321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.912 [2024-07-26 11:35:57.357386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.912 [2024-07-26 11:35:57.357401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.912 [2024-07-26 11:35:57.357408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.912 [2024-07-26 11:35:57.357415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.912 [2024-07-26 11:35:57.357428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.912 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.367435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.912 [2024-07-26 11:35:57.367489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.912 [2024-07-26 11:35:57.367505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.912 [2024-07-26 11:35:57.367513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.912 [2024-07-26 11:35:57.367519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.912 [2024-07-26 11:35:57.367533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.912 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.377455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.912 [2024-07-26 11:35:57.377507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.912 [2024-07-26 11:35:57.377522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.912 [2024-07-26 11:35:57.377529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.912 [2024-07-26 11:35:57.377535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:01.912 [2024-07-26 11:35:57.377549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.912 qpair failed and we were unable to recover it.
00:28:01.912 [2024-07-26 11:35:57.387479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.387535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.912 [2024-07-26 11:35:57.387550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.912 [2024-07-26 11:35:57.387560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.912 [2024-07-26 11:35:57.387567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.912 [2024-07-26 11:35:57.387580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.912 qpair failed and we were unable to recover it. 
00:28:01.912 [2024-07-26 11:35:57.397551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.397609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.912 [2024-07-26 11:35:57.397624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.912 [2024-07-26 11:35:57.397635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.912 [2024-07-26 11:35:57.397642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.912 [2024-07-26 11:35:57.397655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.912 qpair failed and we were unable to recover it. 
00:28:01.912 [2024-07-26 11:35:57.407575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.407636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.912 [2024-07-26 11:35:57.407652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.912 [2024-07-26 11:35:57.407660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.912 [2024-07-26 11:35:57.407666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.912 [2024-07-26 11:35:57.407679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.912 qpair failed and we were unable to recover it. 
00:28:01.912 [2024-07-26 11:35:57.417578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.417663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.912 [2024-07-26 11:35:57.417678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.912 [2024-07-26 11:35:57.417685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.912 [2024-07-26 11:35:57.417692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.912 [2024-07-26 11:35:57.417706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.912 qpair failed and we were unable to recover it. 
00:28:01.912 [2024-07-26 11:35:57.427678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.427754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.912 [2024-07-26 11:35:57.427769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.912 [2024-07-26 11:35:57.427776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.912 [2024-07-26 11:35:57.427782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.912 [2024-07-26 11:35:57.427796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.912 qpair failed and we were unable to recover it. 
00:28:01.912 [2024-07-26 11:35:57.437642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.437700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.912 [2024-07-26 11:35:57.437715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.912 [2024-07-26 11:35:57.437722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.912 [2024-07-26 11:35:57.437729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.912 [2024-07-26 11:35:57.437743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.912 qpair failed and we were unable to recover it. 
00:28:01.912 [2024-07-26 11:35:57.447654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.447709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.912 [2024-07-26 11:35:57.447724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.912 [2024-07-26 11:35:57.447732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.912 [2024-07-26 11:35:57.447738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.912 [2024-07-26 11:35:57.447752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.912 qpair failed and we were unable to recover it. 
00:28:01.912 [2024-07-26 11:35:57.457710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.912 [2024-07-26 11:35:57.457765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.457780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.457787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.457793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.457807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.467762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.467816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.467831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.467838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.467844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.467857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.477785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.477852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.477867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.477878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.477884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.477898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.487817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.487881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.487896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.487903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.487909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.487923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.497836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.497903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.497917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.497925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.497931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.497944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.507850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.507905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.507919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.507927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.507933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.507948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.517888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.517946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.517961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.517968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.517976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.517991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.527907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.527967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.527982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.527990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.527996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.528009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.537904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.537974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.537989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.537995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.538002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.538015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.547944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.547999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.548013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.548020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.548026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.548040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.557975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.558036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.558051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.558058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.558064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.558078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:01.913 [2024-07-26 11:35:57.568004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.913 [2024-07-26 11:35:57.568100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.913 [2024-07-26 11:35:57.568118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.913 [2024-07-26 11:35:57.568125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.913 [2024-07-26 11:35:57.568131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:01.913 [2024-07-26 11:35:57.568144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.913 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.578035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.578095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.578115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.578123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.578129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.578145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.588066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.588133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.588148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.588155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.588161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.588175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.598024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.598079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.598094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.598101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.598108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.598122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.608099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.608154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.608169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.608178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.608184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.608201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.618161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.618216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.618231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.618239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.618245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.618259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.628192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.628267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.628282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.628289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.628296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.628309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.638121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.638185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.638201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.638208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.638214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.638228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.648217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.648316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.648330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.648338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.648344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.648357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.658192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.658248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.658268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.658275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.658281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.658294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.668267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.668325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.668340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.668347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.668354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.668368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.678395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.678453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.678468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.678476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.678482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.678498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.688283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.688357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.688373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.688380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.688387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.688401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.698374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.698429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.698445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.698452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.698458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.698475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.708335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.170 [2024-07-26 11:35:57.708388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.170 [2024-07-26 11:35:57.708402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.170 [2024-07-26 11:35:57.708409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.170 [2024-07-26 11:35:57.708416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.170 [2024-07-26 11:35:57.708430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.170 qpair failed and we were unable to recover it. 
00:28:02.170 [2024-07-26 11:35:57.718350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.718407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.718422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.718429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.718435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.718449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.728455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.728512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.728527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.728534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.728541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.728555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.738512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.738598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.738613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.738621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.738632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.738647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.748537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.748613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.748640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.748647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.748653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.748669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.758493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.758581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.758598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.758605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.758611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.758630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.768512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.768567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.768582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.768589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.768595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.768609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.778598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.778692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.778706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.778714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.778720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.778733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.788634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.788692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.788706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.788713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.788719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.788737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.798704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.798762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.798776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.798783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.798790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.798803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.808689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.808748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.808762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.808770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.808776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.808789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-07-26 11:35:57.818725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.171 [2024-07-26 11:35:57.818782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.171 [2024-07-26 11:35:57.818796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.171 [2024-07-26 11:35:57.818803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.171 [2024-07-26 11:35:57.818809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.171 [2024-07-26 11:35:57.818822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.828770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.828840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.828859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.828867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.828873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.828890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.838815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.838906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.838928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.838936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.838942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.838957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.848810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.848870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.848885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.848892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.848900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.848915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.858774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.858830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.858846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.858853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.858859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.858874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.868850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.868937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.868952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.868959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.868965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.868979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.878893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.878948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.878962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.878969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.878979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.878992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.888895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.888951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.888966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.888973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.888979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.888993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.898933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.898990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.899004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.899011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.899017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.899031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.908987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.909050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.909065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.909073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.909079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.909092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.918986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.919064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.919079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.919087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.919093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.919107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.929047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.929127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.929143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.929150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.929156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.929170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.939063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.939120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.939135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.939142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.939148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.939161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.949128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.949187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.949202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.949210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.949216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.949229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.959099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.959157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.959172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.959180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.959186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.959199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.427 qpair failed and we were unable to recover it. 
00:28:02.427 [2024-07-26 11:35:57.969154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.427 [2024-07-26 11:35:57.969207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.427 [2024-07-26 11:35:57.969221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.427 [2024-07-26 11:35:57.969228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.427 [2024-07-26 11:35:57.969239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.427 [2024-07-26 11:35:57.969252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.428 qpair failed and we were unable to recover it. 
00:28:02.428 [2024-07-26 11:35:57.979165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.428 [2024-07-26 11:35:57.979222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.428 [2024-07-26 11:35:57.979237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.428 [2024-07-26 11:35:57.979245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.428 [2024-07-26 11:35:57.979251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.428 [2024-07-26 11:35:57.979264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.428 qpair failed and we were unable to recover it. 
00:28:02.428 [2024-07-26 11:35:57.989208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:57.989262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:57.989277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:57.989283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:57.989290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:57.989303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:57.999227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:57.999279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:57.999294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:57.999301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:57.999307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:57.999320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.009268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.009322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.009337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.009344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.009350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.009363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.019316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.019393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.019408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.019415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.019420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.019434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.029320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.029374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.029389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.029395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.029402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.029415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.039372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.039429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.039443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.039449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.039456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.039469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.049375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.049435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.049450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.049457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.049463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.049477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.059396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.059449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.059464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.059476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.059481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.059495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.069459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.069515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.069530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.069537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.069543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.069557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.428 [2024-07-26 11:35:58.079498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.428 [2024-07-26 11:35:58.079553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.428 [2024-07-26 11:35:58.079568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.428 [2024-07-26 11:35:58.079574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.428 [2024-07-26 11:35:58.079580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.428 [2024-07-26 11:35:58.079594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.428 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.089524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.089602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.089624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.089638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.089645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.089679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.099562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.099643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.099660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.099667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.099673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.099688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.109568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.109633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.109648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.109655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.109661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.109675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.119567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.119623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.119641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.119649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.119654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.119669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.129599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.129661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.129677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.129684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.129690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.129704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.139622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.139691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.139707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.139713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.139720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.139733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.149710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.149784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.149799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.149809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.149815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.149829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.159687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.159745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.159761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.159768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.159774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.159788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.169706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.169768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.169783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.169790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.169795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.169808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.687 [2024-07-26 11:35:58.179733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.687 [2024-07-26 11:35:58.179783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.687 [2024-07-26 11:35:58.179798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.687 [2024-07-26 11:35:58.179804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.687 [2024-07-26 11:35:58.179811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.687 [2024-07-26 11:35:58.179824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.687 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.189753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.189809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.189823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.189831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.189836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.189849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.199792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.199868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.199884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.199891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.199897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.199911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.209825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.209885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.209899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.209907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.209913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.209927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.219852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.219903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.219919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.219926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.219933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.219946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.229871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.229927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.229942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.229949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.229955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.229969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.239932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.240003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.240017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.240028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.240034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.240047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.249930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.249984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.249999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.250006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.250013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.250027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.260009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.260065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.260079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.260087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.260093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.260107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.269988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.270055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.270070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.270077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.270083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.270097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.280014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.280070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.280085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.280092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.280098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.280111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.290049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.290142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.290156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.290163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.290170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.290186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.300071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.300121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.300135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.300142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.300148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.300162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.310184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.310270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.310285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.310292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.310298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.310311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.320131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.320183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.320200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.320207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.320213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.320228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.330120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.330200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.330219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.330226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.330232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.330246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.688 [2024-07-26 11:35:58.340181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.688 [2024-07-26 11:35:58.340236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.688 [2024-07-26 11:35:58.340251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.688 [2024-07-26 11:35:58.340259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.688 [2024-07-26 11:35:58.340265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:02.688 [2024-07-26 11:35:58.340279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.688 qpair failed and we were unable to recover it.
00:28:02.945 [2024-07-26 11:35:58.350217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.945 [2024-07-26 11:35:58.350275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.945 [2024-07-26 11:35:58.350295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.945 [2024-07-26 11:35:58.350304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.945 [2024-07-26 11:35:58.350310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.945 [2024-07-26 11:35:58.350327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.945 qpair failed and we were unable to recover it. 
00:28:02.945 [2024-07-26 11:35:58.360234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.945 [2024-07-26 11:35:58.360290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.945 [2024-07-26 11:35:58.360306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.945 [2024-07-26 11:35:58.360313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.945 [2024-07-26 11:35:58.360319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.945 [2024-07-26 11:35:58.360334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.945 qpair failed and we were unable to recover it. 
00:28:02.945 [2024-07-26 11:35:58.370280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.945 [2024-07-26 11:35:58.370336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.945 [2024-07-26 11:35:58.370351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.945 [2024-07-26 11:35:58.370358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.945 [2024-07-26 11:35:58.370365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.945 [2024-07-26 11:35:58.370379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.945 qpair failed and we were unable to recover it. 
00:28:02.945 [2024-07-26 11:35:58.380305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.945 [2024-07-26 11:35:58.380357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.945 [2024-07-26 11:35:58.380372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.945 [2024-07-26 11:35:58.380379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.945 [2024-07-26 11:35:58.380386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.945 [2024-07-26 11:35:58.380399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.945 qpair failed and we were unable to recover it. 
00:28:02.945 [2024-07-26 11:35:58.390337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.945 [2024-07-26 11:35:58.390393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.945 [2024-07-26 11:35:58.390409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.945 [2024-07-26 11:35:58.390417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.945 [2024-07-26 11:35:58.390422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.945 [2024-07-26 11:35:58.390436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.400376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.400430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.400445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.400452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.400458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.400473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.410392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.410444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.410458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.410465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.410471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.410485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.420344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.420408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.420427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.420434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.420440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.420454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.430456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.430510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.430524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.430531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.430538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.430552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.440463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.440525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.440539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.440546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.440551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.440565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.450570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.450623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.450642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.450650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.450656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.450669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.460569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.460630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.460646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.460653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.460659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.460676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.470499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.470563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.470578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.470585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.470591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.470605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.480524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.480613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.480631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.480639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.480645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.480659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.490547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.490647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.490662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.490669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.490675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.490689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.500687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.500738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.500752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.500760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.500765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.500779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.510611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.510669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.510690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.510697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.510703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.510717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.946 qpair failed and we were unable to recover it. 
00:28:02.946 [2024-07-26 11:35:58.520705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.946 [2024-07-26 11:35:58.520765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.946 [2024-07-26 11:35:58.520780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.946 [2024-07-26 11:35:58.520788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.946 [2024-07-26 11:35:58.520794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.946 [2024-07-26 11:35:58.520808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.530762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.530836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.530851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.530858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.530864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.530879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.540765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.540820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.540835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.540842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.540848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.540862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.550795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.550872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.550887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.550895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.550901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.550918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.560830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.560888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.560903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.560910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.560916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.560929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.570775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.570828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.570842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.570850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.570856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.570869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.580876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.580927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.580941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.580948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.580954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.580968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.590909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.590961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.590976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.590982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.590989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.591002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:02.947 [2024-07-26 11:35:58.600927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.947 [2024-07-26 11:35:58.601013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.947 [2024-07-26 11:35:58.601030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.947 [2024-07-26 11:35:58.601037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.947 [2024-07-26 11:35:58.601043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:02.947 [2024-07-26 11:35:58.601056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.947 qpair failed and we were unable to recover it. 
00:28:03.205 [2024-07-26 11:35:58.610957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.205 [2024-07-26 11:35:58.611025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.205 [2024-07-26 11:35:58.611044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.205 [2024-07-26 11:35:58.611052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.205 [2024-07-26 11:35:58.611058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.205 [2024-07-26 11:35:58.611074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.205 qpair failed and we were unable to recover it. 
00:28:03.205 [2024-07-26 11:35:58.620987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.205 [2024-07-26 11:35:58.621042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.205 [2024-07-26 11:35:58.621057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.205 [2024-07-26 11:35:58.621066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.205 [2024-07-26 11:35:58.621072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.205 [2024-07-26 11:35:58.621086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.205 qpair failed and we were unable to recover it. 
00:28:03.205 [2024-07-26 11:35:58.631037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.205 [2024-07-26 11:35:58.631093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.205 [2024-07-26 11:35:58.631108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.205 [2024-07-26 11:35:58.631115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.205 [2024-07-26 11:35:58.631122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.205 [2024-07-26 11:35:58.631136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.205 qpair failed and we were unable to recover it.
00:28:03.205 [2024-07-26 11:35:58.641076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.205 [2024-07-26 11:35:58.641133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.205 [2024-07-26 11:35:58.641147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.205 [2024-07-26 11:35:58.641154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.205 [2024-07-26 11:35:58.641164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.205 [2024-07-26 11:35:58.641178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.205 qpair failed and we were unable to recover it.
00:28:03.205 [2024-07-26 11:35:58.651065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.205 [2024-07-26 11:35:58.651122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.205 [2024-07-26 11:35:58.651139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.205 [2024-07-26 11:35:58.651147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.205 [2024-07-26 11:35:58.651154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.205 [2024-07-26 11:35:58.651169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.205 qpair failed and we were unable to recover it.
00:28:03.205 [2024-07-26 11:35:58.661119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.205 [2024-07-26 11:35:58.661175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.205 [2024-07-26 11:35:58.661190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.205 [2024-07-26 11:35:58.661197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.205 [2024-07-26 11:35:58.661203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.205 [2024-07-26 11:35:58.661216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.205 qpair failed and we were unable to recover it.
00:28:03.205 [2024-07-26 11:35:58.671122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.205 [2024-07-26 11:35:58.671189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.205 [2024-07-26 11:35:58.671204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.205 [2024-07-26 11:35:58.671211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.205 [2024-07-26 11:35:58.671217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.205 [2024-07-26 11:35:58.671231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.205 qpair failed and we were unable to recover it.
00:28:03.205 [2024-07-26 11:35:58.681173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.205 [2024-07-26 11:35:58.681237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.205 [2024-07-26 11:35:58.681252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.205 [2024-07-26 11:35:58.681259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.681265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.681278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.691173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.691244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.691258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.691265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.691271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.691284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.701146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.701236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.701251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.701258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.701264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.701277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.711279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.711331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.711345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.711352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.711358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.711372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.721288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.721347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.721361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.721368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.721374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.721387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.731297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.731353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.731368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.731375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.731384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.731398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.741355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.741450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.741465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.741472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.741478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.741491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.751399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.751451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.751466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.751472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.751480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.751493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.761381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.761437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.761452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.761459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.761466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.761479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.771407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.771484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.771499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.771506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.771512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.771526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.781440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.781496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.781512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.781519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.781525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.781539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.791493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.791550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.791566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.791573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.791579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.206 [2024-07-26 11:35:58.791594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.206 qpair failed and we were unable to recover it.
00:28:03.206 [2024-07-26 11:35:58.801543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.206 [2024-07-26 11:35:58.801611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.206 [2024-07-26 11:35:58.801630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.206 [2024-07-26 11:35:58.801638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.206 [2024-07-26 11:35:58.801644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.207 [2024-07-26 11:35:58.801658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.207 [2024-07-26 11:35:58.811570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.207 [2024-07-26 11:35:58.811640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.207 [2024-07-26 11:35:58.811655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.207 [2024-07-26 11:35:58.811662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.207 [2024-07-26 11:35:58.811668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.207 [2024-07-26 11:35:58.811682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.207 [2024-07-26 11:35:58.821539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.207 [2024-07-26 11:35:58.821594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.207 [2024-07-26 11:35:58.821610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.207 [2024-07-26 11:35:58.821617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.207 [2024-07-26 11:35:58.821631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.207 [2024-07-26 11:35:58.821645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.207 [2024-07-26 11:35:58.831582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.207 [2024-07-26 11:35:58.831643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.207 [2024-07-26 11:35:58.831658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.207 [2024-07-26 11:35:58.831665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.207 [2024-07-26 11:35:58.831671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.207 [2024-07-26 11:35:58.831685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.207 [2024-07-26 11:35:58.841603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.207 [2024-07-26 11:35:58.841667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.207 [2024-07-26 11:35:58.841683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.207 [2024-07-26 11:35:58.841690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.207 [2024-07-26 11:35:58.841695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.207 [2024-07-26 11:35:58.841709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.207 [2024-07-26 11:35:58.851696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.207 [2024-07-26 11:35:58.851751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.207 [2024-07-26 11:35:58.851765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.207 [2024-07-26 11:35:58.851773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.207 [2024-07-26 11:35:58.851780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.207 [2024-07-26 11:35:58.851793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.207 [2024-07-26 11:35:58.861666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.207 [2024-07-26 11:35:58.861726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.207 [2024-07-26 11:35:58.861744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.207 [2024-07-26 11:35:58.861751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.207 [2024-07-26 11:35:58.861758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.207 [2024-07-26 11:35:58.861774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.871712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.871784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.871803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.871811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.871817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.871833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.881723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.881783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.881798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.881806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.881812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.881827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.891735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.891795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.891811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.891818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.891824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.891838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.901784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.901839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.901853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.901860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.901866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.901880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.911766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.911866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.911881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.911891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.911897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.911912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.921854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.921916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.921931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.921939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.921945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.921958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.931890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.931945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.931959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.931967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.931973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.931987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.941882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.941946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.941961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.941969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.941975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.941990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.951963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.952022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.952037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.952044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.952050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.466 [2024-07-26 11:35:58.952064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.466 qpair failed and we were unable to recover it.
00:28:03.466 [2024-07-26 11:35:58.961961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.466 [2024-07-26 11:35:58.962020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.466 [2024-07-26 11:35:58.962036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.466 [2024-07-26 11:35:58.962043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.466 [2024-07-26 11:35:58.962049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.467 [2024-07-26 11:35:58.962062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.467 qpair failed and we were unable to recover it.
00:28:03.467 [2024-07-26 11:35:58.971933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.467 [2024-07-26 11:35:58.971992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.467 [2024-07-26 11:35:58.972006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.467 [2024-07-26 11:35:58.972013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.467 [2024-07-26 11:35:58.972019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.467 [2024-07-26 11:35:58.972033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.467 qpair failed and we were unable to recover it.
00:28:03.467 [2024-07-26 11:35:58.981947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.467 [2024-07-26 11:35:58.982000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.467 [2024-07-26 11:35:58.982015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.467 [2024-07-26 11:35:58.982022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.467 [2024-07-26 11:35:58.982028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30
00:28:03.467 [2024-07-26 11:35:58.982042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.467 qpair failed and we were unable to recover it.
00:28:03.467 [2024-07-26 11:35:58.992092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:58.992157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:58.992172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:58.992179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:58.992185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:58.992198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.002098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.002159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.002175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.002185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.002191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.002205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.012017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.012085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.012100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.012107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.012113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.012127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.022126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.022186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.022201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.022209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.022215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.022230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.032174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.032232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.032246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.032254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.032259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.032274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.042181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.042245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.042259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.042267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.042273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.042287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.052222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.052274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.052288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.052296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.052302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.052315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.062237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.062291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.062306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.062313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.062320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.062333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.072260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.072312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.072326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.072333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.072339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.072352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.082258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.082327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.082342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.082350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.082356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.082370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.467 qpair failed and we were unable to recover it. 
00:28:03.467 [2024-07-26 11:35:59.092256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.467 [2024-07-26 11:35:59.092311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.467 [2024-07-26 11:35:59.092330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.467 [2024-07-26 11:35:59.092337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.467 [2024-07-26 11:35:59.092344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.467 [2024-07-26 11:35:59.092357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.468 qpair failed and we were unable to recover it. 
00:28:03.468 [2024-07-26 11:35:59.102399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.468 [2024-07-26 11:35:59.102455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.468 [2024-07-26 11:35:59.102470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.468 [2024-07-26 11:35:59.102476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.468 [2024-07-26 11:35:59.102483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.468 [2024-07-26 11:35:59.102497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.468 qpair failed and we were unable to recover it. 
00:28:03.468 [2024-07-26 11:35:59.112411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.468 [2024-07-26 11:35:59.112467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.468 [2024-07-26 11:35:59.112482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.468 [2024-07-26 11:35:59.112489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.468 [2024-07-26 11:35:59.112496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.468 [2024-07-26 11:35:59.112509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.468 qpair failed and we were unable to recover it. 
00:28:03.468 [2024-07-26 11:35:59.122397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.468 [2024-07-26 11:35:59.122454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.468 [2024-07-26 11:35:59.122472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.468 [2024-07-26 11:35:59.122480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.468 [2024-07-26 11:35:59.122487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.468 [2024-07-26 11:35:59.122502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.468 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.132452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.132513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.132532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.132540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.132547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.132563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.142412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.142500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.142517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.142524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.142531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.142546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.152526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.152587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.152602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.152610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.152616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.152634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.162482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.162555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.162570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.162577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.162583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.162597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.172541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.172592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.172608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.172615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.172621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.172639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.182592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.182649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.182668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.182675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.182681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.182695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.192601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.192662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.192677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.192684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.192690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.192703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.202630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.202693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.202708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.202714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.202720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.202735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.212652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.212709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.212725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.212733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.212738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.212752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.222625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.222686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.222701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.222709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.222715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.222732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.232671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.232728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.232743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.232750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.232756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.232769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.242767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.242832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.242847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.242854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.242860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.242874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.252796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.252852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.252867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.252875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.252881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.252894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.262806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.262861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.262876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.262883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.262889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.262903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.272840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.272895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.272915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.272922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.272928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.272942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.282867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.282922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.282937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.282944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.282950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.282963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.292827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.292880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.292894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.292901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.292908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.292922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.302918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.303007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.303022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.303029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.303035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.303048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.312885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.312939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.312953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.312960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.312966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.312982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.322970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.323032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.323050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.323057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.323063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.323077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.333036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.333096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.333112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.333119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.333125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.333139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.343042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.343102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.343117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.343124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.343130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.343144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.353073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.353125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.353140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.353147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.353153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.353167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.363030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.363118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.363136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.363143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.363149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.363163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.373120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.373194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.373210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.373217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.373223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.373236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.726 [2024-07-26 11:35:59.383107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.726 [2024-07-26 11:35:59.383166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.726 [2024-07-26 11:35:59.383185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.726 [2024-07-26 11:35:59.383193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.726 [2024-07-26 11:35:59.383199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.726 [2024-07-26 11:35:59.383216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.726 qpair failed and we were unable to recover it. 
00:28:03.984 [2024-07-26 11:35:59.393104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.984 [2024-07-26 11:35:59.393162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.984 [2024-07-26 11:35:59.393180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.984 [2024-07-26 11:35:59.393188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.984 [2024-07-26 11:35:59.393196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.984 [2024-07-26 11:35:59.393211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.984 qpair failed and we were unable to recover it. 
00:28:03.984 [2024-07-26 11:35:59.403186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.984 [2024-07-26 11:35:59.403246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.984 [2024-07-26 11:35:59.403261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.984 [2024-07-26 11:35:59.403268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.984 [2024-07-26 11:35:59.403278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.984 [2024-07-26 11:35:59.403292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.413206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.413260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.413276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.413283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.413289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.985 [2024-07-26 11:35:59.413303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.423190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.423243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.423258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.423265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.423271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.985 [2024-07-26 11:35:59.423285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.433269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.433322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.433337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.433343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.433350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.985 [2024-07-26 11:35:59.433364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.443292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.443350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.443364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.443371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.443377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.985 [2024-07-26 11:35:59.443391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.453319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.453390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.453405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.453412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.453418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.985 [2024-07-26 11:35:59.453431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.463384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.463448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.463463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.463470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.463476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18b8f30 00:28:03.985 [2024-07-26 11:35:59.463490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.473387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.473502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.473554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.473579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.473598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:28:03.985 [2024-07-26 11:35:59.473659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.483439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.483533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.483564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.483580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.483595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f8000b90 00:28:03.985 [2024-07-26 11:35:59.483637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.493614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.493742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.493810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.493841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.493870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f0000b90 00:28:03.985 [2024-07-26 11:35:59.493919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.503472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.503551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.503585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.503602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.503618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f41f0000b90 00:28:03.985 [2024-07-26 11:35:59.503658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:03.985 qpair failed and we were unable to recover it. 00:28:03.985 [2024-07-26 11:35:59.503817] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:03.985 A controller has encountered a failure and is being reset. 
00:28:03.985 [2024-07-26 11:35:59.513535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.513686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.513740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.513766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.513786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4200000b90 00:28:03.985 [2024-07-26 11:35:59.513833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:03.985 qpair failed and we were unable to recover it. 
00:28:03.985 [2024-07-26 11:35:59.523556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.985 [2024-07-26 11:35:59.523647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.985 [2024-07-26 11:35:59.523681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.985 [2024-07-26 11:35:59.523697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.985 [2024-07-26 11:35:59.523712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4200000b90 00:28:03.985 [2024-07-26 11:35:59.523746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:03.985 qpair failed and we were unable to recover it. 00:28:03.985 Controller properly reset. 00:28:03.985 Initializing NVMe Controllers 00:28:03.985 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:03.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:03.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:03.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:03.985 Initialization complete. Launching workers. 
00:28:03.985 Starting thread on core 1 00:28:03.985 Starting thread on core 2 00:28:03.985 Starting thread on core 3 00:28:03.985 Starting thread on core 0 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:03.985 00:28:03.985 real 0m10.759s 00:28:03.985 user 0m19.304s 00:28:03.985 sys 0m4.723s 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 ************************************ 00:28:03.985 END TEST nvmf_target_disconnect_tc2 00:28:03.985 ************************************ 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.985 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.243 rmmod nvme_tcp 00:28:04.243 rmmod nvme_fabrics 00:28:04.243 rmmod nvme_keyring 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1672092 ']' 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1672092 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1672092 ']' 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1672092 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1672092 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1672092' 00:28:04.243 killing process with pid 1672092 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1672092 00:28:04.243 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1672092 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.501 11:35:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.402 11:36:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.402 00:28:06.402 real 0m19.279s 00:28:06.402 user 0m46.591s 00:28:06.402 sys 0m9.455s 00:28:06.402 11:36:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.402 11:36:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:06.402 ************************************ 00:28:06.402 END TEST nvmf_target_disconnect 00:28:06.402 ************************************ 00:28:06.660 11:36:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:06.660 00:28:06.660 real 5m56.616s 00:28:06.660 user 10m54.573s 00:28:06.660 sys 1m54.214s 00:28:06.660 11:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.660 11:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.660 ************************************ 00:28:06.660 END TEST nvmf_host 00:28:06.660 ************************************ 00:28:06.660 00:28:06.660 real 21m23.168s 00:28:06.660 user 45m19.464s 00:28:06.660 sys 6m40.852s 00:28:06.660 11:36:02 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.660 11:36:02 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.660 ************************************ 00:28:06.660 END TEST nvmf_tcp 00:28:06.660 ************************************ 00:28:06.660 11:36:02 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:28:06.660 11:36:02 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:06.660 11:36:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.660 11:36:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.660 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.660 ************************************ 00:28:06.660 START TEST spdkcli_nvmf_tcp 00:28:06.660 ************************************ 00:28:06.660 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:06.660 * Looking for test storage... 00:28:06.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:06.660 11:36:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:06.660 11:36:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:06.660 11:36:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1673937 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1673937 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1673937 ']' 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.661 11:36:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.919 [2024-07-26 11:36:02.344540] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:28:06.919 [2024-07-26 11:36:02.344590] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673937 ] 00:28:06.919 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.919 [2024-07-26 11:36:02.410205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:06.919 [2024-07-26 11:36:02.487995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.919 [2024-07-26 11:36:02.487997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.483 11:36:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.483 11:36:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:28:07.483 11:36:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:07.483 11:36:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.483 11:36:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.739 11:36:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:07.739 11:36:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:07.739 11:36:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:07.739 11:36:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:07.739 11:36:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:28:07.739 11:36:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:07.739 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:07.739 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:07.739 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:07.739 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:07.739 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:07.739 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:07.739 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:07.739 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:07.739 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:07.739 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:07.739 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:07.739 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:07.739 ' 00:28:10.263 [2024-07-26 11:36:05.748696] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.634 [2024-07-26 11:36:07.028931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:14.158 [2024-07-26 11:36:09.412362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:16.054 [2024-07-26 11:36:11.458760] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:28:17.426 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:17.426 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:17.426 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:17.426 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:17.426 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:17.426 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:17.426 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:17.426 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:17.426 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:17.426 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:17.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:17.426 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:17.684 11:36:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:17.684 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:17.684 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.684 11:36:13 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:17.684 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.684 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.684 11:36:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:17.684 11:36:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:17.941 11:36:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:17.941 11:36:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:17.941 11:36:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:17.941 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:17.941 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:18.198 11:36:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:18.198 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.198 11:36:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:18.198 11:36:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:18.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:18.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:18.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:18.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:18.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:18.198 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:18.198 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:18.198 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:18.198 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:18.198 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:18.198 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:18.198 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:18.198 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:18.198 ' 00:28:23.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:23.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:23.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:23.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:23.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:23.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:23.460 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:23.460 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:23.460 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:28:23.460 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:23.460 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:23.460 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:23.460 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:23.460 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:23.460 11:36:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:23.460 11:36:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:23.460 11:36:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1673937 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1673937 ']' 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1673937 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1673937 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1673937' 00:28:23.460 killing process with pid 1673937 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1673937 00:28:23.460 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1673937 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:23.719 
11:36:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1673937 ']' 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1673937 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1673937 ']' 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1673937 00:28:23.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1673937) - No such process 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1673937 is not found' 00:28:23.719 Process with pid 1673937 is not found 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:23.719 00:28:23.719 real 0m17.079s 00:28:23.719 user 0m37.221s 00:28:23.719 sys 0m0.856s 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.719 11:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.719 ************************************ 00:28:23.719 END TEST spdkcli_nvmf_tcp 00:28:23.719 ************************************ 00:28:23.719 11:36:19 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:23.719 11:36:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:23.719 11:36:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:23.719 11:36:19 -- common/autotest_common.sh@10 -- # set +x 00:28:23.719 ************************************ 00:28:23.719 START TEST 
nvmf_identify_passthru 00:28:23.719 ************************************ 00:28:23.719 11:36:19 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:23.978 * Looking for test storage... 00:28:23.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:23.978 11:36:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.978 11:36:19 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.978 11:36:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.978 11:36:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.978 11:36:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.978 11:36:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.978 11:36:19 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.978 11:36:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:23.978 11:36:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.978 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.978 11:36:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.978 11:36:19 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.978 11:36:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.978 11:36:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.979 11:36:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.979 11:36:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.979 11:36:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.979 11:36:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:28:23.979 11:36:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.979 11:36:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.979 11:36:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:23.979 11:36:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:23.979 11:36:19 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:23.979 11:36:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:29.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:29.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:29.249 11:36:24 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.249 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:29.250 Found net devices under 0000:86:00.0: cvl_0_0 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.250 11:36:24 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:29.250 Found net devices under 0000:86:00.1: cvl_0_1 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.250 11:36:24 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.250 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.509 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.509 11:36:24 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:29.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:28:29.509 00:28:29.509 --- 10.0.0.2 ping statistics --- 00:28:29.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.509 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:29.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:28:29.509 00:28:29.509 --- 10.0.0.1 ping statistics --- 00:28:29.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.509 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:29.509 11:36:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:29.509 11:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:29.509 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.509 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.509 11:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:29.509 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:29.509 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:29.509 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:29.768 11:36:25 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:28:29.768 11:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:28:29.768 11:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:29.768 11:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:29.768 11:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:29.768 11:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:29.768 11:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:29.768 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.060 11:36:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:28:35.060 11:36:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:35.060 11:36:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:28:35.060 11:36:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:35.060 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.241 11:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:39.241 11:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:39.241 11:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:39.241 11:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1681596 00:28:39.241 11:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:39.241 11:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:39.241 11:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1681596 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1681596 ']' 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:39.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.241 11:36:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:39.241 [2024-07-26 11:36:34.663695] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:28:39.241 [2024-07-26 11:36:34.663742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.241 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.241 [2024-07-26 11:36:34.735150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.241 [2024-07-26 11:36:34.807418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.241 [2024-07-26 11:36:34.807460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.241 [2024-07-26 11:36:34.807467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.241 [2024-07-26 11:36:34.807472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.241 [2024-07-26 11:36:34.807477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:39.241 [2024-07-26 11:36:34.807538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.241 [2024-07-26 11:36:34.807672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.241 [2024-07-26 11:36:34.807717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.241 [2024-07-26 11:36:34.807718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:28:40.168 11:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:40.168 INFO: Log level set to 20 00:28:40.168 INFO: Requests: 00:28:40.168 { 00:28:40.168 "jsonrpc": "2.0", 00:28:40.168 "method": "nvmf_set_config", 00:28:40.168 "id": 1, 00:28:40.168 "params": { 00:28:40.168 "admin_cmd_passthru": { 00:28:40.168 "identify_ctrlr": true 00:28:40.168 } 00:28:40.168 } 00:28:40.168 } 00:28:40.168 00:28:40.168 INFO: response: 00:28:40.168 { 00:28:40.168 "jsonrpc": "2.0", 00:28:40.168 "id": 1, 00:28:40.168 "result": true 00:28:40.168 } 00:28:40.168 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.168 11:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:40.168 INFO: Setting log level to 20 00:28:40.168 INFO: Setting log level to 20 00:28:40.168 INFO: Log level set to 20 00:28:40.168 INFO: Log level set to 20 00:28:40.168 
INFO: Requests: 00:28:40.168 { 00:28:40.168 "jsonrpc": "2.0", 00:28:40.168 "method": "framework_start_init", 00:28:40.168 "id": 1 00:28:40.168 } 00:28:40.168 00:28:40.168 INFO: Requests: 00:28:40.168 { 00:28:40.168 "jsonrpc": "2.0", 00:28:40.168 "method": "framework_start_init", 00:28:40.168 "id": 1 00:28:40.168 } 00:28:40.168 00:28:40.168 [2024-07-26 11:36:35.574535] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:40.168 INFO: response: 00:28:40.168 { 00:28:40.168 "jsonrpc": "2.0", 00:28:40.168 "id": 1, 00:28:40.168 "result": true 00:28:40.168 } 00:28:40.168 00:28:40.168 INFO: response: 00:28:40.168 { 00:28:40.168 "jsonrpc": "2.0", 00:28:40.168 "id": 1, 00:28:40.168 "result": true 00:28:40.168 } 00:28:40.168 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.168 11:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:40.168 INFO: Setting log level to 40 00:28:40.168 INFO: Setting log level to 40 00:28:40.168 INFO: Setting log level to 40 00:28:40.168 [2024-07-26 11:36:35.588046] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.168 11:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:40.168 11:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:40.168 11:36:35 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.168 11:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.438 Nvme0n1 00:28:43.438 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.438 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:43.438 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.438 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.438 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.438 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.439 [2024-07-26 11:36:38.478061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.439 11:36:38 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.439 [ 00:28:43.439 { 00:28:43.439 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:43.439 "subtype": "Discovery", 00:28:43.439 "listen_addresses": [], 00:28:43.439 "allow_any_host": true, 00:28:43.439 "hosts": [] 00:28:43.439 }, 00:28:43.439 { 00:28:43.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.439 "subtype": "NVMe", 00:28:43.439 "listen_addresses": [ 00:28:43.439 { 00:28:43.439 "trtype": "TCP", 00:28:43.439 "adrfam": "IPv4", 00:28:43.439 "traddr": "10.0.0.2", 00:28:43.439 "trsvcid": "4420" 00:28:43.439 } 00:28:43.439 ], 00:28:43.439 "allow_any_host": true, 00:28:43.439 "hosts": [], 00:28:43.439 "serial_number": "SPDK00000000000001", 00:28:43.439 "model_number": "SPDK bdev Controller", 00:28:43.439 "max_namespaces": 1, 00:28:43.439 "min_cntlid": 1, 00:28:43.439 "max_cntlid": 65519, 00:28:43.439 "namespaces": [ 00:28:43.439 { 00:28:43.439 "nsid": 1, 00:28:43.439 "bdev_name": "Nvme0n1", 00:28:43.439 "name": "Nvme0n1", 00:28:43.439 "nguid": "9554977B39D64C1B98687CBBA7919BCE", 00:28:43.439 "uuid": "9554977b-39d6-4c1b-9868-7cbba7919bce" 00:28:43.439 } 00:28:43.439 ] 00:28:43.439 } 00:28:43.439 ] 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:43.439 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:28:43.439 11:36:38 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:43.439 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:43.439 11:36:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:43.439 rmmod 
nvme_tcp 00:28:43.439 rmmod nvme_fabrics 00:28:43.439 rmmod nvme_keyring 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1681596 ']' 00:28:43.439 11:36:38 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1681596 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1681596 ']' 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1681596 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.439 11:36:38 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1681596 00:28:43.439 11:36:39 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:43.439 11:36:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:43.439 11:36:39 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1681596' 00:28:43.439 killing process with pid 1681596 00:28:43.439 11:36:39 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1681596 00:28:43.439 11:36:39 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1681596 00:28:45.965 11:36:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:45.965 11:36:41 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:45.965 11:36:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:45.965 11:36:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:28:45.965 11:36:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:45.965 11:36:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.965 11:36:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:45.965 11:36:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.880 11:36:43 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:47.880 00:28:47.880 real 0m23.755s 00:28:47.880 user 0m33.257s 00:28:47.880 sys 0m5.168s 00:28:47.880 11:36:43 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.880 11:36:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:47.880 ************************************ 00:28:47.880 END TEST nvmf_identify_passthru 00:28:47.880 ************************************ 00:28:47.880 11:36:43 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:47.880 11:36:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:47.880 11:36:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:47.880 11:36:43 -- common/autotest_common.sh@10 -- # set +x 00:28:47.880 ************************************ 00:28:47.880 START TEST nvmf_dif 00:28:47.880 ************************************ 00:28:47.880 11:36:43 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:47.880 * Looking for test storage... 
00:28:47.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:47.880 11:36:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.880 11:36:43 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.880 11:36:43 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.880 11:36:43 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.880 11:36:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.880 11:36:43 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.880 11:36:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.880 11:36:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:28:47.880 11:36:43 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:47.880 11:36:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:47.880 11:36:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:47.880 11:36:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:47.880 11:36:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:47.880 11:36:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.880 11:36:43 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:47.880 11:36:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:47.880 11:36:43 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:47.880 11:36:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:53.168 11:36:48 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.168 11:36:48 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.169 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:28:53.169 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:53.169 Found net devices under 0000:86:00.0: cvl_0_0 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:53.169 Found net devices under 0000:86:00.1: cvl_0_1 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.169 11:36:48 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:53.169 11:36:48 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.428 11:36:48 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.428 11:36:48 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.428 11:36:48 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:53.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:28:53.428 00:28:53.428 --- 10.0.0.2 ping statistics --- 00:28:53.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.428 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:28:53.428 11:36:48 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:28:53.428 00:28:53.428 --- 10.0.0.1 ping statistics --- 00:28:53.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.428 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:53.428 11:36:48 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.428 11:36:48 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:53.428 11:36:48 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:53.429 11:36:48 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:55.963 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:55.963 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:55.963 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.222 11:36:51 
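The `nvmf_tcp_init` sequence traced above moves one physical port (`cvl_0_0`) into a private network namespace while the initiator port (`cvl_0_1`) stays in the root namespace, so target and initiator traffic cross the real wire, then verifies both directions with `ping`. A minimal sketch of that sequence, with interface names, addresses, and the listener port copied from this run; a `DRY_RUN` guard (on by default here) only prints the commands, since actually reconfiguring interfaces requires root:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init flow seen in this log. Names and IPs are
# taken from the trace; this is an illustration, not SPDK's common.sh.
set -euo pipefail

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
PORT=4420

run() {
    # With DRY_RUN=1 (the default here) just print the command; otherwise
    # execute it, which needs root and the real interfaces present.
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                       # target NIC into the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport "$PORT" -j ACCEPT
run ping -c 1 "$TARGET_IP"                                     # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"              # target -> initiator
```

Once this succeeds, the trace prepends `ip netns exec cvl_0_0_ns_spdk` to `NVMF_APP` so `nvmf_tgt` itself runs inside the namespace.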
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:56.222 11:36:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:56.222 11:36:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1687288 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:56.222 11:36:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1687288 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1687288 ']' 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.222 11:36:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:56.222 [2024-07-26 11:36:51.805066] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:28:56.222 [2024-07-26 11:36:51.805105] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.222 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.222 [2024-07-26 11:36:51.876206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.481 [2024-07-26 11:36:51.952922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.481 [2024-07-26 11:36:51.952960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.481 [2024-07-26 11:36:51.952967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.481 [2024-07-26 11:36:51.952973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.481 [2024-07-26 11:36:51.952977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:56.481 [2024-07-26 11:36:51.952996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:28:57.049 11:36:52 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:57.049 11:36:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.049 11:36:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:57.049 11:36:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:57.049 [2024-07-26 11:36:52.648275] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.049 11:36:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:57.049 11:36:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:57.049 ************************************ 00:28:57.049 START TEST fio_dif_1_default 00:28:57.049 ************************************ 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:57.049 bdev_null0 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.049 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:57.308 [2024-07-26 11:36:52.716564] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:57.308 { 00:28:57.308 "params": { 00:28:57.308 "name": "Nvme$subsystem", 00:28:57.308 "trtype": "$TEST_TRANSPORT", 00:28:57.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.308 "adrfam": "ipv4", 00:28:57.308 "trsvcid": "$NVMF_PORT", 00:28:57.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.308 "hdgst": ${hdgst:-false}, 00:28:57.308 "ddgst": ${ddgst:-false} 00:28:57.308 }, 00:28:57.308 "method": "bdev_nvme_attach_controller" 00:28:57.308 } 00:28:57.308 EOF 00:28:57.308 )") 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
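The `create_subsystems`/`create_subsystem` helpers expanded above boil down to four RPCs per subsystem id: create a DIF-enabled null bdev, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. A sketch of that sequence with the argument values from this run (`NULL_SIZE=64`, `NULL_BLOCK_SIZE=512`, `NULL_META=16`, `NULL_DIF=1`); the `rpc_cmd` wrapper here is a dry-run stand-in that prints the call rather than invoking SPDK's `rpc.py`:

```shell
#!/usr/bin/env bash
set -euo pipefail

NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

rpc_cmd() {
    # Stand-in for SPDK's rpc_cmd; with DRY_RUN=1 (default here) just print
    # the RPC that would be issued against the running nvmf_tgt.
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "rpc.py $*"; else ./scripts/rpc.py "$@"; fi
}

create_subsystem() {
    local sub_id=$1
    rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
}

create_subsystem 0
```

The multi-subsystem test later in this log runs the same loop for ids 0 and 1, and `destroy_subsystem` undoes it with `nvmf_delete_subsystem` plus `bdev_null_delete`.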
00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
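The `gen_nvmf_target_json` expansion above assembles one `bdev_nvme_attach_controller` stanza per subsystem id via a heredoc and hands it to fio's `spdk_bdev` ioengine through `/dev/fd/62`. A standalone rendering of the same document (the `jq`/`printf` normalization pass is omitted; values are copied from this run's expanded output):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Values as they appear in this run's trace.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem
    for subsystem in "$@"; do
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    done
}

gen_nvmf_target_json 0
```

Passing more ids (e.g. `gen_nvmf_target_json 0 1`) yields one attach-controller stanza per subsystem, which is how the multi-subsystem fio jobs later in this log get their Nvme0/Nvme1 bdevs.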
00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:57.308 "params": { 00:28:57.308 "name": "Nvme0", 00:28:57.308 "trtype": "tcp", 00:28:57.308 "traddr": "10.0.0.2", 00:28:57.308 "adrfam": "ipv4", 00:28:57.308 "trsvcid": "4420", 00:28:57.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:57.308 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:57.308 "hdgst": false, 00:28:57.308 "ddgst": false 00:28:57.308 }, 00:28:57.308 "method": "bdev_nvme_attach_controller" 00:28:57.308 }' 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:57.308 11:36:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:57.567 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:57.567 fio-3.35 
00:28:57.567 Starting 1 thread 00:28:57.567 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.809 00:29:09.809 filename0: (groupid=0, jobs=1): err= 0: pid=1687668: Fri Jul 26 11:37:03 2024 00:29:09.809 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10025msec) 00:29:09.809 slat (nsec): min=5727, max=25538, avg=5983.79, stdev=867.23 00:29:09.809 clat (usec): min=402, max=46187, avg=21087.12, stdev=20602.52 00:29:09.809 lat (usec): min=408, max=46213, avg=21093.10, stdev=20602.46 00:29:09.809 clat percentiles (usec): 00:29:09.809 | 1.00th=[ 408], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 424], 00:29:09.809 | 30.00th=[ 433], 40.00th=[ 478], 50.00th=[40633], 60.00th=[41157], 00:29:09.809 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:29:09.809 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:29:09.809 | 99.99th=[46400] 00:29:09.809 bw ( KiB/s): min= 704, max= 768, per=99.99%, avg=758.40, stdev=21.02, samples=20 00:29:09.809 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:29:09.809 lat (usec) : 500=40.32%, 750=9.58% 00:29:09.809 lat (msec) : 50=50.11% 00:29:09.809 cpu : usr=94.20%, sys=5.55%, ctx=14, majf=0, minf=227 00:29:09.809 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.809 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.809 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:09.809 00:29:09.809 Run status group 0 (all jobs): 00:29:09.809 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10025-10025msec 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:09.809 11:37:03 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.809 00:29:09.809 real 0m11.202s 00:29:09.809 user 0m16.205s 00:29:09.809 sys 0m0.845s 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 ************************************ 00:29:09.809 END TEST fio_dif_1_default 00:29:09.809 ************************************ 00:29:09.809 11:37:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:09.809 11:37:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:09.809 11:37:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 ************************************ 00:29:09.809 START TEST fio_dif_1_multi_subsystems 00:29:09.809 
************************************ 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 bdev_null0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 [2024-07-26 11:37:03.995346] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:09.809 11:37:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 bdev_null1 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.809 
11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.809 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.810 11:37:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.810 { 00:29:09.810 "params": { 00:29:09.810 "name": "Nvme$subsystem", 00:29:09.810 "trtype": "$TEST_TRANSPORT", 00:29:09.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.810 "adrfam": "ipv4", 00:29:09.810 "trsvcid": "$NVMF_PORT", 00:29:09.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.810 "hdgst": ${hdgst:-false}, 00:29:09.810 "ddgst": ${ddgst:-false} 00:29:09.810 }, 00:29:09.810 "method": "bdev_nvme_attach_controller" 00:29:09.810 } 00:29:09.810 EOF 00:29:09.810 )") 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.810 { 00:29:09.810 "params": { 00:29:09.810 "name": "Nvme$subsystem", 00:29:09.810 "trtype": "$TEST_TRANSPORT", 00:29:09.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.810 "adrfam": "ipv4", 00:29:09.810 "trsvcid": "$NVMF_PORT", 00:29:09.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.810 "hdgst": ${hdgst:-false}, 00:29:09.810 "ddgst": ${ddgst:-false} 00:29:09.810 }, 00:29:09.810 "method": "bdev_nvme_attach_controller" 00:29:09.810 } 00:29:09.810 EOF 00:29:09.810 )") 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:09.810 "params": { 00:29:09.810 "name": "Nvme0", 00:29:09.810 "trtype": "tcp", 00:29:09.810 "traddr": "10.0.0.2", 00:29:09.810 "adrfam": "ipv4", 00:29:09.810 "trsvcid": "4420", 00:29:09.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.810 "hdgst": false, 00:29:09.810 "ddgst": false 00:29:09.810 }, 00:29:09.810 "method": "bdev_nvme_attach_controller" 00:29:09.810 },{ 00:29:09.810 "params": { 00:29:09.810 "name": "Nvme1", 00:29:09.810 "trtype": "tcp", 00:29:09.810 "traddr": "10.0.0.2", 00:29:09.810 "adrfam": "ipv4", 00:29:09.810 "trsvcid": "4420", 00:29:09.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:09.810 "hdgst": false, 00:29:09.810 "ddgst": false 00:29:09.810 }, 00:29:09.810 "method": "bdev_nvme_attach_controller" 00:29:09.810 }' 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:09.810 11:37:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:09.810 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:09.810 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:09.810 fio-3.35 00:29:09.810 Starting 2 threads 00:29:09.810 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.782 00:29:19.782 filename0: (groupid=0, jobs=1): err= 0: pid=1689639: Fri Jul 26 11:37:15 2024 00:29:19.782 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:29:19.782 slat (nsec): min=5791, max=32146, avg=6828.27, stdev=1944.47 00:29:19.782 clat (usec): min=406, max=42578, avg=21038.81, stdev=20533.80 00:29:19.782 lat (usec): min=412, max=42585, avg=21045.64, stdev=20533.21 00:29:19.782 clat percentiles (usec): 00:29:19.782 | 1.00th=[ 416], 5.00th=[ 420], 10.00th=[ 424], 20.00th=[ 433], 00:29:19.782 | 30.00th=[ 437], 40.00th=[ 529], 50.00th=[40633], 60.00th=[41157], 00:29:19.782 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:29:19.782 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:29:19.782 | 99.99th=[42730] 00:29:19.782 bw ( KiB/s): min= 704, max= 768, per=66.12%, avg=761.26, stdev=17.13, samples=19 00:29:19.782 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:29:19.782 lat (usec) : 500=38.42%, 750=11.26%, 1000=0.21% 00:29:19.782 lat (msec) : 50=50.11% 00:29:19.782 cpu : usr=97.71%, sys=2.04%, ctx=10, majf=0, minf=125 00:29:19.782 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:29:19.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.782 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.782 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:19.782 filename1: (groupid=0, jobs=1): err= 0: pid=1689640: Fri Jul 26 11:37:15 2024 00:29:19.782 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10009msec) 00:29:19.782 slat (nsec): min=5793, max=25009, avg=7523.06, stdev=2588.85 00:29:19.782 clat (usec): min=430, max=42004, avg=40829.44, stdev=2590.83 00:29:19.782 lat (usec): min=437, max=42016, avg=40836.97, stdev=2590.87 00:29:19.782 clat percentiles (usec): 00:29:19.782 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:19.782 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:19.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:19.782 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:19.782 | 99.99th=[42206] 00:29:19.782 bw ( KiB/s): min= 384, max= 416, per=33.88%, avg=390.40, stdev=13.13, samples=20 00:29:19.782 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:29:19.782 lat (usec) : 500=0.41% 00:29:19.782 lat (msec) : 50=99.59% 00:29:19.782 cpu : usr=97.81%, sys=1.95%, ctx=10, majf=0, minf=151 00:29:19.782 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:19.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.782 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.782 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:19.782 00:29:19.782 Run status group 0 (all jobs): 00:29:19.782 READ: bw=1151KiB/s (1179kB/s), 392KiB/s-760KiB/s (401kB/s-778kB/s), io=11.2MiB (11.8MB), run=10003-10009msec 00:29:19.782 
11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.782 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:20.042 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.042 00:29:20.042 real 0m11.483s 00:29:20.042 user 0m26.806s 00:29:20.042 sys 0m0.742s 00:29:20.042 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.042 11:37:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:20.042 ************************************ 00:29:20.042 END TEST fio_dif_1_multi_subsystems 00:29:20.042 ************************************ 00:29:20.042 11:37:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:20.042 11:37:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:20.042 11:37:15 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.042 11:37:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:20.042 ************************************ 00:29:20.042 START TEST fio_dif_rand_params 00:29:20.042 ************************************ 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:20.042 11:37:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:20.042 bdev_null0 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.042 
11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:20.042 [2024-07-26 11:37:15.547961] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:20.042 { 00:29:20.042 "params": { 00:29:20.042 "name": 
"Nvme$subsystem", 00:29:20.042 "trtype": "$TEST_TRANSPORT", 00:29:20.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.042 "adrfam": "ipv4", 00:29:20.042 "trsvcid": "$NVMF_PORT", 00:29:20.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.042 "hdgst": ${hdgst:-false}, 00:29:20.042 "ddgst": ${ddgst:-false} 00:29:20.042 }, 00:29:20.042 "method": "bdev_nvme_attach_controller" 00:29:20.042 } 00:29:20.042 EOF 00:29:20.042 )") 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:20.042 11:37:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:20.042 "params": { 00:29:20.042 "name": "Nvme0", 00:29:20.042 "trtype": "tcp", 00:29:20.042 "traddr": "10.0.0.2", 00:29:20.042 "adrfam": "ipv4", 00:29:20.042 "trsvcid": "4420", 00:29:20.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:20.042 "hdgst": false, 00:29:20.042 "ddgst": false 00:29:20.042 }, 00:29:20.042 "method": "bdev_nvme_attach_controller" 00:29:20.042 }' 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:20.042 11:37:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:20.042 11:37:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:20.300 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:20.300 ... 00:29:20.300 fio-3.35 00:29:20.300 Starting 3 threads 00:29:20.300 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.862 00:29:26.862 filename0: (groupid=0, jobs=1): err= 0: pid=1691600: Fri Jul 26 11:37:21 2024 00:29:26.862 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(190MiB/5046msec) 00:29:26.862 slat (nsec): min=6067, max=30474, avg=10710.43, stdev=2225.71 00:29:26.862 clat (usec): min=3664, max=51047, avg=9903.78, stdev=7508.05 00:29:26.862 lat (usec): min=3677, max=51058, avg=9914.49, stdev=7507.96 00:29:26.862 clat percentiles (usec): 00:29:26.863 | 1.00th=[ 3949], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 7111], 00:29:26.863 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:29:26.863 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:29:26.863 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50594], 99.95th=[51119], 00:29:26.863 | 99.99th=[51119] 00:29:26.863 bw ( KiB/s): min=30976, max=48640, per=32.60%, avg=38912.00, stdev=4807.53, samples=10 00:29:26.863 iops : min= 242, max= 380, avg=304.00, stdev=37.56, samples=10 00:29:26.863 lat (msec) : 4=1.18%, 10=77.00%, 20=18.33%, 50=3.02%, 100=0.46% 00:29:26.863 cpu : usr=94.51%, sys=5.19%, ctx=12, majf=0, minf=29 00:29:26.863 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.863 issued rwts: total=1522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.863 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:26.863 filename0: (groupid=0, jobs=1): err= 0: pid=1691601: Fri 
Jul 26 11:37:21 2024 00:29:26.863 read: IOPS=330, BW=41.3MiB/s (43.3MB/s)(207MiB/5003msec) 00:29:26.863 slat (nsec): min=6082, max=25661, avg=10580.44, stdev=2122.48 00:29:26.863 clat (usec): min=3322, max=51690, avg=9061.40, stdev=5879.03 00:29:26.863 lat (usec): min=3329, max=51702, avg=9071.98, stdev=5879.14 00:29:26.863 clat percentiles (usec): 00:29:26.863 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 5866], 20.00th=[ 6587], 00:29:26.863 | 30.00th=[ 7504], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 8979], 00:29:26.863 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 00:29:26.863 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:29:26.863 | 99.99th=[51643] 00:29:26.863 bw ( KiB/s): min=37376, max=51200, per=35.43%, avg=42291.20, stdev=4188.82, samples=10 00:29:26.863 iops : min= 292, max= 400, avg=330.40, stdev=32.73, samples=10 00:29:26.863 lat (msec) : 4=3.87%, 10=78.54%, 20=15.60%, 50=1.57%, 100=0.42% 00:29:26.863 cpu : usr=94.14%, sys=5.58%, ctx=19, majf=0, minf=118 00:29:26.863 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.863 issued rwts: total=1654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.863 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:26.863 filename0: (groupid=0, jobs=1): err= 0: pid=1691602: Fri Jul 26 11:37:21 2024 00:29:26.863 read: IOPS=305, BW=38.2MiB/s (40.1MB/s)(191MiB/5003msec) 00:29:26.863 slat (nsec): min=6078, max=24684, avg=10698.85, stdev=2145.38 00:29:26.863 clat (usec): min=3273, max=51058, avg=9801.32, stdev=7354.67 00:29:26.863 lat (usec): min=3280, max=51070, avg=9812.02, stdev=7354.66 00:29:26.863 clat percentiles (usec): 00:29:26.863 | 1.00th=[ 3556], 5.00th=[ 4424], 10.00th=[ 6128], 20.00th=[ 7111], 00:29:26.863 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 
8717], 60.00th=[ 9110], 00:29:26.863 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10945], 95.00th=[12125], 00:29:26.863 | 99.00th=[49021], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:29:26.863 | 99.99th=[51119] 00:29:26.863 bw ( KiB/s): min=29184, max=47360, per=32.75%, avg=39091.20, stdev=5465.40, samples=10 00:29:26.863 iops : min= 228, max= 370, avg=305.40, stdev=42.70, samples=10 00:29:26.863 lat (msec) : 4=4.19%, 10=74.56%, 20=17.92%, 50=3.07%, 100=0.26% 00:29:26.863 cpu : usr=95.18%, sys=4.52%, ctx=15, majf=0, minf=115 00:29:26.863 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.863 issued rwts: total=1529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.863 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:26.863 00:29:26.863 Run status group 0 (all jobs): 00:29:26.863 READ: bw=117MiB/s (122MB/s), 37.7MiB/s-41.3MiB/s (39.5MB/s-43.3MB/s), io=588MiB (617MB), run=5003-5046msec 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 bdev_null0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 [2024-07-26 11:37:21.764594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:29:26.863 bdev_null1 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:26.863 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:26.863 
11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.864 bdev_null2 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
2 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:26.864 { 00:29:26.864 "params": { 00:29:26.864 "name": "Nvme$subsystem", 00:29:26.864 "trtype": "$TEST_TRANSPORT", 00:29:26.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.864 "adrfam": "ipv4", 00:29:26.864 "trsvcid": "$NVMF_PORT", 00:29:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.864 "hdgst": ${hdgst:-false}, 00:29:26.864 "ddgst": ${ddgst:-false} 00:29:26.864 }, 00:29:26.864 "method": "bdev_nvme_attach_controller" 00:29:26.864 } 00:29:26.864 EOF 00:29:26.864 )") 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:26.864 { 00:29:26.864 "params": { 00:29:26.864 "name": "Nvme$subsystem", 00:29:26.864 "trtype": "$TEST_TRANSPORT", 00:29:26.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.864 "adrfam": "ipv4", 00:29:26.864 "trsvcid": "$NVMF_PORT", 00:29:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.864 "hdgst": ${hdgst:-false}, 00:29:26.864 "ddgst": ${ddgst:-false} 00:29:26.864 }, 00:29:26.864 "method": "bdev_nvme_attach_controller" 00:29:26.864 } 00:29:26.864 EOF 00:29:26.864 )") 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:26.864 { 00:29:26.864 "params": { 00:29:26.864 "name": "Nvme$subsystem", 00:29:26.864 "trtype": "$TEST_TRANSPORT", 00:29:26.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.864 "adrfam": "ipv4", 00:29:26.864 "trsvcid": "$NVMF_PORT", 00:29:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.864 "hdgst": ${hdgst:-false}, 00:29:26.864 "ddgst": ${ddgst:-false} 00:29:26.864 }, 00:29:26.864 "method": "bdev_nvme_attach_controller" 00:29:26.864 } 00:29:26.864 EOF 00:29:26.864 )") 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:26.864 "params": { 00:29:26.864 "name": "Nvme0", 00:29:26.864 "trtype": "tcp", 00:29:26.864 "traddr": "10.0.0.2", 00:29:26.864 "adrfam": "ipv4", 00:29:26.864 "trsvcid": "4420", 00:29:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:26.864 "hdgst": false, 00:29:26.864 "ddgst": false 00:29:26.864 }, 00:29:26.864 "method": "bdev_nvme_attach_controller" 00:29:26.864 },{ 00:29:26.864 "params": { 00:29:26.864 "name": "Nvme1", 00:29:26.864 "trtype": "tcp", 00:29:26.864 "traddr": "10.0.0.2", 00:29:26.864 "adrfam": "ipv4", 00:29:26.864 "trsvcid": "4420", 00:29:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.864 "hdgst": false, 00:29:26.864 "ddgst": false 00:29:26.864 }, 00:29:26.864 "method": "bdev_nvme_attach_controller" 00:29:26.864 },{ 00:29:26.864 "params": { 00:29:26.864 "name": "Nvme2", 00:29:26.864 "trtype": "tcp", 00:29:26.864 "traddr": "10.0.0.2", 00:29:26.864 "adrfam": "ipv4", 00:29:26.864 "trsvcid": "4420", 00:29:26.864 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:26.864 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:26.864 "hdgst": false, 00:29:26.864 "ddgst": false 00:29:26.864 }, 00:29:26.864 "method": "bdev_nvme_attach_controller" 00:29:26.864 }' 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:26.864 11:37:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:26.864 11:37:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.864 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:26.864 ... 00:29:26.864 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:26.864 ... 00:29:26.864 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:26.864 ... 
00:29:26.864 fio-3.35 00:29:26.864 Starting 24 threads 00:29:26.864 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.115 00:29:39.115 filename0: (groupid=0, jobs=1): err= 0: pid=1692719: Fri Jul 26 11:37:33 2024 00:29:39.115 read: IOPS=535, BW=2144KiB/s (2195kB/s)(21.0MiB/10013msec) 00:29:39.115 slat (nsec): min=7398, max=80262, avg=31884.02, stdev=18991.45 00:29:39.115 clat (usec): min=17954, max=49670, avg=29518.18, stdev=1563.19 00:29:39.115 lat (usec): min=17963, max=49689, avg=29550.06, stdev=1565.58 00:29:39.115 clat percentiles (usec): 00:29:39.115 | 1.00th=[21627], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:39.115 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:39.115 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:39.115 | 99.00th=[30540], 99.50th=[38011], 99.90th=[47449], 99.95th=[47449], 00:29:39.115 | 99.99th=[49546] 00:29:39.115 bw ( KiB/s): min= 2048, max= 2256, per=4.17%, avg=2139.75, stdev=64.67, samples=20 00:29:39.115 iops : min= 512, max= 564, avg=534.90, stdev=16.15, samples=20 00:29:39.115 lat (msec) : 20=0.82%, 50=99.18% 00:29:39.115 cpu : usr=98.94%, sys=0.64%, ctx=14, majf=0, minf=27 00:29:39.115 IO depths : 1=5.9%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:39.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.115 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.115 issued rwts: total=5366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.115 filename0: (groupid=0, jobs=1): err= 0: pid=1692720: Fri Jul 26 11:37:33 2024 00:29:39.115 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec) 00:29:39.115 slat (nsec): min=8450, max=53748, avg=20735.58, stdev=6541.32 00:29:39.115 clat (usec): min=18678, max=38107, avg=29777.95, stdev=788.26 00:29:39.116 lat (usec): min=18718, max=38145, avg=29798.68, stdev=787.42 
00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:39.116 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:39.116 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:29:39.116 | 99.00th=[30802], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011], 00:29:39.116 | 99.99th=[38011] 00:29:39.116 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19 00:29:39.116 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19 00:29:39.116 lat (msec) : 20=0.30%, 50=99.70% 00:29:39.116 cpu : usr=98.95%, sys=0.66%, ctx=10, majf=0, minf=17 00:29:39.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:39.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.116 filename0: (groupid=0, jobs=1): err= 0: pid=1692721: Fri Jul 26 11:37:33 2024 00:29:39.116 read: IOPS=541, BW=2167KiB/s (2219kB/s)(21.2MiB/10012msec) 00:29:39.116 slat (nsec): min=7431, max=80270, avg=13362.05, stdev=10084.42 00:29:39.116 clat (usec): min=1355, max=32029, avg=29419.19, stdev=3021.03 00:29:39.116 lat (usec): min=1368, max=32044, avg=29432.55, stdev=3020.79 00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[10159], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:29:39.116 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:39.116 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:29:39.116 | 99.00th=[30540], 99.50th=[31327], 99.90th=[31851], 99.95th=[32113], 00:29:39.116 | 99.99th=[32113] 00:29:39.116 bw ( KiB/s): min= 2048, max= 2560, per=4.22%, avg=2163.20, stdev=109.09, samples=20 00:29:39.116 iops : min= 
512, max= 640, avg=540.80, stdev=27.27, samples=20 00:29:39.116 lat (msec) : 2=0.88%, 20=1.01%, 50=98.10% 00:29:39.116 cpu : usr=98.97%, sys=0.64%, ctx=15, majf=0, minf=27 00:29:39.116 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:39.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.116 filename0: (groupid=0, jobs=1): err= 0: pid=1692723: Fri Jul 26 11:37:33 2024 00:29:39.116 read: IOPS=535, BW=2144KiB/s (2195kB/s)(21.0MiB/10013msec) 00:29:39.116 slat (nsec): min=7311, max=80433, avg=31293.05, stdev=18921.86 00:29:39.116 clat (usec): min=11894, max=47636, avg=29537.45, stdev=1704.32 00:29:39.116 lat (usec): min=11908, max=47672, avg=29568.74, stdev=1706.11 00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[21627], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:39.116 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:39.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:39.116 | 99.00th=[31851], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449], 00:29:39.116 | 99.99th=[47449] 00:29:39.116 bw ( KiB/s): min= 2048, max= 2256, per=4.17%, avg=2139.75, stdev=64.67, samples=20 00:29:39.116 iops : min= 512, max= 564, avg=534.90, stdev=16.15, samples=20 00:29:39.116 lat (msec) : 20=0.89%, 50=99.11% 00:29:39.116 cpu : usr=98.81%, sys=0.80%, ctx=11, majf=0, minf=20 00:29:39.116 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:39.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 issued rwts: total=5366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.116 
latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.116 filename0: (groupid=0, jobs=1): err= 0: pid=1692724: Fri Jul 26 11:37:33 2024 00:29:39.116 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec) 00:29:39.116 slat (nsec): min=8482, max=56993, avg=23524.12, stdev=7072.52 00:29:39.116 clat (usec): min=17569, max=39716, avg=29746.54, stdev=805.93 00:29:39.116 lat (usec): min=17578, max=39743, avg=29770.07, stdev=805.41 00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:39.116 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:39.116 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:29:39.116 | 99.00th=[30802], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011], 00:29:39.116 | 99.99th=[39584] 00:29:39.116 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19 00:29:39.116 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19 00:29:39.116 lat (msec) : 20=0.30%, 50=99.70% 00:29:39.116 cpu : usr=98.95%, sys=0.66%, ctx=13, majf=0, minf=18 00:29:39.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:39.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.116 filename0: (groupid=0, jobs=1): err= 0: pid=1692725: Fri Jul 26 11:37:33 2024 00:29:39.116 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10005msec) 00:29:39.116 slat (nsec): min=6160, max=86025, avg=23602.30, stdev=6861.09 00:29:39.116 clat (usec): min=14066, max=60056, avg=29749.77, stdev=1980.30 00:29:39.116 lat (usec): min=14086, max=60073, avg=29773.37, stdev=1979.58 00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[29230], 
5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:39.116 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:39.116 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:29:39.116 | 99.00th=[30540], 99.50th=[31065], 99.90th=[60031], 99.95th=[60031], 00:29:39.116 | 99.99th=[60031] 00:29:39.116 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2128.84, stdev=76.45, samples=19 00:29:39.116 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19 00:29:39.116 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:29:39.116 cpu : usr=99.01%, sys=0.61%, ctx=9, majf=0, minf=17 00:29:39.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:39.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.116 filename0: (groupid=0, jobs=1): err= 0: pid=1692726: Fri Jul 26 11:37:33 2024 00:29:39.116 read: IOPS=534, BW=2137KiB/s (2189kB/s)(20.9MiB/10012msec) 00:29:39.116 slat (nsec): min=7342, max=80298, avg=31162.28, stdev=19284.47 00:29:39.116 clat (usec): min=14524, max=59639, avg=29615.83, stdev=1994.72 00:29:39.116 lat (usec): min=14532, max=59667, avg=29646.99, stdev=1995.53 00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[18744], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:39.116 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:39.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:29:39.116 | 99.00th=[40633], 99.50th=[41157], 99.90th=[49546], 99.95th=[49546], 00:29:39.116 | 99.99th=[59507] 00:29:39.116 bw ( KiB/s): min= 1968, max= 2176, per=4.15%, avg=2131.37, stdev=69.66, samples=19 00:29:39.116 iops : min= 492, max= 544, avg=532.84, stdev=17.41, samples=19 
00:29:39.116 lat (msec) : 20=1.10%, 50=98.86%, 100=0.04% 00:29:39.116 cpu : usr=98.89%, sys=0.73%, ctx=12, majf=0, minf=27 00:29:39.116 IO depths : 1=5.8%, 2=12.0%, 4=24.6%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:39.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 issued rwts: total=5350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.116 filename0: (groupid=0, jobs=1): err= 0: pid=1692727: Fri Jul 26 11:37:33 2024 00:29:39.116 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec) 00:29:39.116 slat (nsec): min=7835, max=52071, avg=22949.33, stdev=7279.49 00:29:39.116 clat (usec): min=17698, max=39879, avg=29727.85, stdev=812.44 00:29:39.116 lat (usec): min=17713, max=39892, avg=29750.79, stdev=812.41 00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:39.116 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:39.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:29:39.116 | 99.00th=[30802], 99.50th=[31327], 99.90th=[38536], 99.95th=[38536], 00:29:39.116 | 99.99th=[40109] 00:29:39.116 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19 00:29:39.116 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19 00:29:39.116 lat (msec) : 20=0.30%, 50=99.70% 00:29:39.116 cpu : usr=98.79%, sys=0.82%, ctx=8, majf=0, minf=22 00:29:39.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:39.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.116 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.116 latency : target=0, window=0, percentile=100.00%, depth=16 
00:29:39.116 filename1: (groupid=0, jobs=1): err= 0: pid=1692728: Fri Jul 26 11:37:33 2024 00:29:39.116 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10003msec) 00:29:39.116 slat (usec): min=6, max=137, avg=23.73, stdev= 6.81 00:29:39.116 clat (usec): min=13961, max=58189, avg=29739.68, stdev=1897.76 00:29:39.116 lat (usec): min=13984, max=58207, avg=29763.40, stdev=1897.08 00:29:39.116 clat percentiles (usec): 00:29:39.116 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:39.116 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:39.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:29:39.117 | 99.00th=[30802], 99.50th=[31065], 99.90th=[57934], 99.95th=[57934], 00:29:39.117 | 99.99th=[57934] 00:29:39.117 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2128.84, stdev=76.45, samples=19 00:29:39.117 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19 00:29:39.117 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:29:39.117 cpu : usr=98.49%, sys=1.13%, ctx=16, majf=0, minf=26 00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.117 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:39.117 filename1: (groupid=0, jobs=1): err= 0: pid=1692729: Fri Jul 26 11:37:33 2024 00:29:39.117 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec) 00:29:39.117 slat (nsec): min=7446, max=54051, avg=22688.21, stdev=7571.96 00:29:39.117 clat (usec): min=17726, max=39856, avg=29727.90, stdev=809.53 00:29:39.117 lat (usec): min=17741, max=39869, avg=29750.59, stdev=809.63 00:29:39.117 clat percentiles (usec): 00:29:39.117 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:39.117 | 
30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.117 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.117 | 99.00th=[30802], 99.50th=[31327], 99.90th=[38536], 99.95th=[38536],
00:29:39.117 | 99.99th=[40109]
00:29:39.117 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19
00:29:39.117 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19
00:29:39.117 lat (msec) : 20=0.30%, 50=99.70%
00:29:39.117 cpu : usr=98.74%, sys=0.87%, ctx=10, majf=0, minf=18
00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.117 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.117 filename1: (groupid=0, jobs=1): err= 0: pid=1692730: Fri Jul 26 11:37:33 2024
00:29:39.117 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec)
00:29:39.117 slat (nsec): min=12141, max=48227, avg=23471.11, stdev=6789.09
00:29:39.117 clat (usec): min=17595, max=39814, avg=29733.74, stdev=811.46
00:29:39.117 lat (usec): min=17611, max=39833, avg=29757.21, stdev=811.29
00:29:39.117 clat percentiles (usec):
00:29:39.117 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.117 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.117 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.117 | 99.00th=[30802], 99.50th=[31589], 99.90th=[38011], 99.95th=[38536],
00:29:39.117 | 99.99th=[39584]
00:29:39.117 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19
00:29:39.117 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19
00:29:39.117 lat (msec) : 20=0.30%, 50=99.70%
00:29:39.117 cpu : usr=98.78%, sys=0.83%, ctx=18, majf=0, minf=17
00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.117 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.117 filename1: (groupid=0, jobs=1): err= 0: pid=1692732: Fri Jul 26 11:37:33 2024
00:29:39.117 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec)
00:29:39.117 slat (nsec): min=10108, max=51986, avg=23157.07, stdev=6443.34
00:29:39.117 clat (usec): min=18665, max=38225, avg=29741.87, stdev=797.32
00:29:39.117 lat (usec): min=18689, max=38251, avg=29765.02, stdev=797.02
00:29:39.117 clat percentiles (usec):
00:29:39.117 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.117 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.117 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:29:39.117 | 99.00th=[30802], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011],
00:29:39.117 | 99.99th=[38011]
00:29:39.117 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19
00:29:39.117 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19
00:29:39.117 lat (msec) : 20=0.30%, 50=99.70%
00:29:39.117 cpu : usr=98.83%, sys=0.78%, ctx=13, majf=0, minf=25
00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.117 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.117 filename1: (groupid=0, jobs=1): err= 0: pid=1692733: Fri Jul 26 11:37:33 2024
00:29:39.117 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10003msec)
00:29:39.117 slat (nsec): min=6376, max=93751, avg=23695.23, stdev=6672.87
00:29:39.117 clat (usec): min=13996, max=61223, avg=29732.13, stdev=1889.76
00:29:39.117 lat (usec): min=14011, max=61241, avg=29755.83, stdev=1889.19
00:29:39.117 clat percentiles (usec):
00:29:39.117 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.117 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.117 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.117 | 99.00th=[30802], 99.50th=[31065], 99.90th=[57410], 99.95th=[57410],
00:29:39.117 | 99.99th=[61080]
00:29:39.117 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2129.00, stdev=76.00, samples=19
00:29:39.117 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19
00:29:39.117 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30%
00:29:39.117 cpu : usr=98.82%, sys=0.79%, ctx=15, majf=0, minf=25
00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.117 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.117 filename1: (groupid=0, jobs=1): err= 0: pid=1692734: Fri Jul 26 11:37:33 2024
00:29:39.117 read: IOPS=533, BW=2136KiB/s (2187kB/s)(20.9MiB/10009msec)
00:29:39.117 slat (nsec): min=5936, max=65635, avg=20878.23, stdev=7230.80
00:29:39.117 clat (usec): min=13965, max=63605, avg=29799.24, stdev=2144.84
00:29:39.117 lat (usec): min=13981, max=63621, avg=29820.12, stdev=2143.96
00:29:39.117 clat percentiles (usec):
00:29:39.117 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.117 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.117 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:29:39.117 | 99.00th=[30540], 99.50th=[31065], 99.90th=[63701], 99.95th=[63701],
00:29:39.117 | 99.99th=[63701]
00:29:39.117 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2128.84, stdev=76.45, samples=19
00:29:39.117 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19
00:29:39.117 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30%
00:29:39.117 cpu : usr=98.65%, sys=0.96%, ctx=15, majf=0, minf=20
00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.117 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.117 filename1: (groupid=0, jobs=1): err= 0: pid=1692735: Fri Jul 26 11:37:33 2024
00:29:39.117 read: IOPS=535, BW=2141KiB/s (2192kB/s)(20.9MiB/10014msec)
00:29:39.117 slat (nsec): min=7616, max=59080, avg=10393.51, stdev=2804.52
00:29:39.117 clat (usec): min=18582, max=38077, avg=29795.86, stdev=996.34
00:29:39.117 lat (usec): min=18591, max=38116, avg=29806.26, stdev=996.56
00:29:39.117 clat percentiles (usec):
00:29:39.117 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754],
00:29:39.117 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.117 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278],
00:29:39.117 | 99.00th=[30802], 99.50th=[31327], 99.90th=[38011], 99.95th=[38011],
00:29:39.117 | 99.99th=[38011]
00:29:39.117 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2137.35, stdev=60.58, samples=20
00:29:39.117 iops : min= 510, max= 544, avg=534.30, stdev=15.21, samples=20
00:29:39.117 lat (msec) : 20=0.60%, 50=99.40%
00:29:39.117 cpu : usr=98.96%, sys=0.65%, ctx=15, majf=0, minf=31
00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.117 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.117 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.117 filename1: (groupid=0, jobs=1): err= 0: pid=1692736: Fri Jul 26 11:37:33 2024
00:29:39.117 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec)
00:29:39.117 slat (nsec): min=7576, max=88539, avg=23426.64, stdev=7451.54
00:29:39.117 clat (usec): min=14028, max=56746, avg=29722.61, stdev=1828.56
00:29:39.117 lat (usec): min=14036, max=56763, avg=29746.04, stdev=1828.92
00:29:39.117 clat percentiles (usec):
00:29:39.117 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.117 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.117 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.117 | 99.00th=[30540], 99.50th=[31065], 99.90th=[56886], 99.95th=[56886],
00:29:39.117 | 99.99th=[56886]
00:29:39.117 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2128.84, stdev=76.45, samples=19
00:29:39.117 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19
00:29:39.117 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30%
00:29:39.117 cpu : usr=98.62%, sys=0.98%, ctx=8, majf=0, minf=20
00:29:39.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692737: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=533, BW=2136KiB/s (2187kB/s)(20.9MiB/10008msec)
00:29:39.118 slat (nsec): min=5774, max=47182, avg=20029.68, stdev=7712.07
00:29:39.118 clat (usec): min=13920, max=66515, avg=29807.50, stdev=2355.87
00:29:39.118 lat (usec): min=13950, max=66533, avg=29827.53, stdev=2355.06
00:29:39.118 clat percentiles (usec):
00:29:39.118 | 1.00th=[22676], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.118 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.118 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:29:39.118 | 99.00th=[36439], 99.50th=[36963], 99.90th=[62653], 99.95th=[62653],
00:29:39.118 | 99.99th=[66323]
00:29:39.118 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2128.84, stdev=76.45, samples=19
00:29:39.118 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19
00:29:39.118 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30%
00:29:39.118 cpu : usr=98.90%, sys=0.72%, ctx=14, majf=0, minf=27
00:29:39.118 IO depths : 1=5.8%, 2=12.0%, 4=24.6%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0%
00:29:39.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692738: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10004msec)
00:29:39.118 slat (nsec): min=6112, max=57238, avg=23589.90, stdev=6482.16
00:29:39.118 clat (usec): min=13981, max=59196, avg=29743.27, stdev=1942.36
00:29:39.118 lat (usec): min=14002, max=59213, avg=29766.86, stdev=1941.68
00:29:39.118 clat percentiles (usec):
00:29:39.118 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.118 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.118 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.118 | 99.00th=[30540], 99.50th=[31065], 99.90th=[58983], 99.95th=[58983],
00:29:39.118 | 99.99th=[58983]
00:29:39.118 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2129.00, stdev=76.00, samples=19
00:29:39.118 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19
00:29:39.118 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30%
00:29:39.118 cpu : usr=98.69%, sys=0.92%, ctx=15, majf=0, minf=16
00:29:39.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692739: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec)
00:29:39.118 slat (nsec): min=8721, max=47382, avg=21747.63, stdev=6283.65
00:29:39.118 clat (usec): min=18701, max=38202, avg=29766.89, stdev=793.01
00:29:39.118 lat (usec): min=18728, max=38224, avg=29788.64, stdev=792.28
00:29:39.118 clat percentiles (usec):
00:29:39.118 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.118 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.118 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:29:39.118 | 99.00th=[31065], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011],
00:29:39.118 | 99.99th=[38011]
00:29:39.118 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19
00:29:39.118 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19
00:29:39.118 lat (msec) : 20=0.30%, 50=99.70%
00:29:39.118 cpu : usr=98.77%, sys=0.84%, ctx=14, majf=0, minf=29
00:29:39.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692740: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10001msec)
00:29:39.118 slat (nsec): min=7269, max=78882, avg=17676.98, stdev=13716.84
00:29:39.118 clat (usec): min=12964, max=88314, avg=29537.84, stdev=4042.92
00:29:39.118 lat (usec): min=12972, max=88357, avg=29555.52, stdev=4041.71
00:29:39.118 clat percentiles (usec):
00:29:39.118 | 1.00th=[20841], 5.00th=[23462], 10.00th=[25035], 20.00th=[26870],
00:29:39.118 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.118 | 70.00th=[30016], 80.00th=[30016], 90.00th=[33424], 95.00th=[35390],
00:29:39.118 | 99.00th=[35914], 99.50th=[39060], 99.90th=[70779], 99.95th=[70779],
00:29:39.118 | 99.99th=[88605]
00:29:39.118 bw ( KiB/s): min= 1907, max= 2256, per=4.20%, avg=2155.11, stdev=72.17, samples=19
00:29:39.118 iops : min= 476, max= 564, avg=538.74, stdev=18.19, samples=19
00:29:39.118 lat (msec) : 20=0.89%, 50=98.82%, 100=0.30%
00:29:39.118 cpu : usr=98.78%, sys=0.84%, ctx=6, majf=0, minf=28
00:29:39.118 IO depths : 1=0.1%, 2=0.7%, 4=4.4%, 8=78.8%, 16=16.1%, 32=0.0%, >=64=0.0%
00:29:39.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=89.5%, 8=8.5%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5404,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692742: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec)
00:29:39.118 slat (nsec): min=7874, max=52362, avg=16426.62, stdev=6565.47
00:29:39.118 clat (usec): min=18647, max=38143, avg=29813.61, stdev=790.86
00:29:39.118 lat (usec): min=18671, max=38177, avg=29830.03, stdev=790.01
00:29:39.118 clat percentiles (usec):
00:29:39.118 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754],
00:29:39.118 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.118 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:29:39.118 | 99.00th=[31065], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011],
00:29:39.118 | 99.99th=[38011]
00:29:39.118 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19
00:29:39.118 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19
00:29:39.118 lat (msec) : 20=0.30%, 50=99.70%
00:29:39.118 cpu : usr=98.85%, sys=0.76%, ctx=10, majf=0, minf=25
00:29:39.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692743: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec)
00:29:39.118 slat (nsec): min=9371, max=82877, avg=22754.54, stdev=6602.30
00:29:39.118 clat (usec): min=18743, max=38577, avg=29734.61, stdev=801.41
00:29:39.118 lat (usec): min=18768, max=38595, avg=29757.36, stdev=801.29
00:29:39.118 clat percentiles (usec):
00:29:39.118 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.118 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.118 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.118 | 99.00th=[30802], 99.50th=[31589], 99.90th=[38536], 99.95th=[38536],
00:29:39.118 | 99.99th=[38536]
00:29:39.118 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.32, stdev=60.96, samples=19
00:29:39.118 iops : min= 512, max= 544, avg=533.79, stdev=15.22, samples=19
00:29:39.118 lat (msec) : 20=0.30%, 50=99.70%
00:29:39.118 cpu : usr=98.82%, sys=0.80%, ctx=13, majf=0, minf=24
00:29:39.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:39.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692744: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10013msec)
00:29:39.118 slat (nsec): min=7464, max=68275, avg=24189.86, stdev=10393.99
00:29:39.118 clat (usec): min=15686, max=56890, avg=29757.66, stdev=878.09
00:29:39.118 lat (usec): min=15747, max=56903, avg=29781.85, stdev=877.04
00:29:39.118 clat percentiles (usec):
00:29:39.118 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.118 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.118 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.118 | 99.00th=[31065], 99.50th=[31851], 99.90th=[40633], 99.95th=[40633],
00:29:39.118 | 99.99th=[56886]
00:29:39.118 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2130.95, stdev=62.46, samples=20
00:29:39.118 iops : min= 512, max= 544, avg=532.70, stdev=15.59, samples=20
00:29:39.118 lat (msec) : 20=0.04%, 50=99.93%, 100=0.04%
00:29:39.118 cpu : usr=98.35%, sys=0.93%, ctx=97, majf=0, minf=35
00:29:39.118 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:39.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.118 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.118 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.118 filename2: (groupid=0, jobs=1): err= 0: pid=1692745: Fri Jul 26 11:37:33 2024
00:29:39.118 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10002msec)
00:29:39.118 slat (usec): min=7, max=147, avg=23.62, stdev= 7.17
00:29:39.118 clat (usec): min=14020, max=56797, avg=29724.86, stdev=1833.31
00:29:39.118 lat (usec): min=14043, max=56838, avg=29748.48, stdev=1833.13
00:29:39.119 clat percentiles (usec):
00:29:39.119 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:29:39.119 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:29:39.119 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278],
00:29:39.119 | 99.00th=[30540], 99.50th=[31065], 99.90th=[56886], 99.95th=[56886],
00:29:39.119 | 99.99th=[56886]
00:29:39.119 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2128.84, stdev=76.45, samples=19
00:29:39.119 iops : min= 480, max= 544, avg=532.21, stdev=19.11, samples=19
00:29:39.119 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30%
00:29:39.119 cpu : usr=98.90%, sys=0.70%, ctx=11, majf=0, minf=18
00:29:39.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:39.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:39.119 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:39.119 latency : target=0, window=0, percentile=100.00%, depth=16
00:29:39.119
00:29:39.119 Run status group 0 (all jobs):
00:29:39.119 READ: bw=50.1MiB/s (52.5MB/s), 2135KiB/s-2167KiB/s (2186kB/s-2219kB/s), io=502MiB (526MB), run=10001-10014msec
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 bdev_null0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 [2024-07-26 11:37:33.390419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 bdev_null1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:39.119 {
00:29:39.119 "params": {
00:29:39.119 "name": "Nvme$subsystem",
00:29:39.119 "trtype": "$TEST_TRANSPORT",
00:29:39.119 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:39.119 "adrfam": "ipv4",
00:29:39.119 "trsvcid": "$NVMF_PORT",
00:29:39.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:39.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:39.119 "hdgst": ${hdgst:-false},
00:29:39.119 "ddgst": ${ddgst:-false}
00:29:39.119 },
00:29:39.119 "method": "bdev_nvme_attach_controller"
00:29:39.119 }
00:29:39.119 EOF
00:29:39.119 )")
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:39.119 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:39.120 {
00:29:39.120 "params": {
00:29:39.120 "name": "Nvme$subsystem",
00:29:39.120 "trtype": "$TEST_TRANSPORT",
00:29:39.120 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:39.120 "adrfam": "ipv4",
00:29:39.120 "trsvcid": "$NVMF_PORT",
00:29:39.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:39.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:39.120 "hdgst": ${hdgst:-false},
00:29:39.120 "ddgst": ${ddgst:-false}
00:29:39.120 },
00:29:39.120 "method": "bdev_nvme_attach_controller"
00:29:39.120 }
00:29:39.120 EOF
00:29:39.120 )")
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:39.120 "params": {
00:29:39.120 "name": "Nvme0",
00:29:39.120 "trtype": "tcp",
00:29:39.120 "traddr": "10.0.0.2",
00:29:39.120 "adrfam": "ipv4",
00:29:39.120 "trsvcid": "4420",
00:29:39.120 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:39.120 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:39.120 "hdgst": false,
00:29:39.120 "ddgst": false
00:29:39.120 },
00:29:39.120 "method": "bdev_nvme_attach_controller"
00:29:39.120 },{
00:29:39.120 "params": {
00:29:39.120 "name": "Nvme1",
00:29:39.120 "trtype": "tcp",
00:29:39.120 "traddr": "10.0.0.2",
00:29:39.120 "adrfam": "ipv4",
00:29:39.120 "trsvcid": "4420",
00:29:39.120 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:39.120 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:39.120 "hdgst": false,
00:29:39.120 "ddgst": false
00:29:39.120 },
00:29:39.120 "method": "bdev_nvme_attach_controller"
00:29:39.120 }'
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:29:39.120 11:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:39.120 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:29:39.120 ...
00:29:39.120 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:29:39.120 ...
00:29:39.120 fio-3.35
00:29:39.120 Starting 4 threads
00:29:39.120 EAL: No free 2048 kB hugepages reported on node 1
00:29:44.386
00:29:44.386 filename0: (groupid=0, jobs=1): err= 0: pid=1694673: Fri Jul 26 11:37:39 2024
00:29:44.386 read: IOPS=2625, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec)
00:29:44.386 slat (nsec): min=5993, max=43025, avg=9633.99, stdev=3620.54
00:29:44.386 clat (usec): min=608, max=5578, avg=3017.59, stdev=519.87
00:29:44.386 lat (usec): min=615, max=5590, avg=3027.22, stdev=519.58
00:29:44.386 clat percentiles (usec):
00:29:44.386 | 1.00th=[ 1975], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2704],
00:29:44.386 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2999],
00:29:44.386 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3654], 95.00th=[ 4080],
00:29:44.386 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5473],
00:29:44.386 | 99.99th=[ 5538]
00:29:44.386 bw ( KiB/s): min=19696, max=21616, per=24.39%, avg=20936.89, stdev=587.56, samples=9
00:29:44.386 iops : min= 2462, max= 2702, avg=2617.11, stdev=73.44, samples=9
00:29:44.386 lat (usec) : 750=0.02%, 1000=0.11%
00:29:44.386 lat (msec) : 2=1.02%, 4=91.87%, 10=6.98%
00:29:44.386 cpu : usr=96.70%, sys=2.98%, ctx=9, majf=0, minf=36
00:29:44.386 IO depths : 1=0.1%, 2=6.4%, 4=66.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:44.386 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:44.386 issued rwts: total=13129,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:44.386 latency : target=0, window=0, percentile=100.00%, depth=8
00:29:44.386 filename0: (groupid=0, jobs=1): err= 0: pid=1694675: Fri Jul 26 11:37:39 2024
00:29:44.386 read: IOPS=2691, BW=21.0MiB/s (22.1MB/s)(105MiB/5002msec)
00:29:44.386 slat (nsec): min=6012, max=37375, avg=9437.11, stdev=3443.52
00:29:44.386 clat (usec): min=692, max=5478, avg=2944.06, stdev=488.37
00:29:44.386 lat (usec): min=707, max=5490, avg=2953.50, stdev=488.25
00:29:44.386 clat percentiles (usec):
00:29:44.386 | 1.00th=[ 1942], 5.00th=[ 2245], 10.00th=[ 2474], 20.00th=[ 2638],
00:29:44.386 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966],
00:29:44.386 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3458], 95.00th=[ 4047],
00:29:44.386 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5407],
00:29:44.386 | 99.99th=[ 5473]
00:29:44.386 bw ( KiB/s): min=20768, max=22128, per=25.08%, avg=21531.20, stdev=495.92, samples=10
00:29:44.386 iops : min= 2596, max= 2766, avg=2691.40, stdev=61.99, samples=10
00:29:44.386 lat (usec) : 750=0.01%, 1000=0.01%
00:29:44.386 lat (msec) : 2=1.42%, 4=92.86%, 10=5.69%
00:29:44.386 cpu : usr=96.20%, sys=3.48%, ctx=8, majf=0, minf=44
00:29:44.386 IO depths : 1=0.4%, 2=4.2%, 4=67.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:44.386 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:44.386 issued rwts: total=13465,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:44.386 latency : target=0, window=0, percentile=100.00%, depth=8
00:29:44.386 filename1: (groupid=0, jobs=1): err= 0: pid=1694676: Fri Jul 26 11:37:39 2024
00:29:44.386 read: IOPS=2668, BW=20.8MiB/s (21.9MB/s)(104MiB/5001msec)
00:29:44.386 slat (nsec): min=6010,
max=41848, avg=9655.93, stdev=3662.13 00:29:44.386 clat (usec): min=565, max=5504, avg=2967.07, stdev=501.90 00:29:44.386 lat (usec): min=577, max=5516, avg=2976.72, stdev=501.64 00:29:44.386 clat percentiles (usec): 00:29:44.386 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:29:44.386 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966], 00:29:44.386 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3523], 95.00th=[ 4047], 00:29:44.386 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5276], 00:29:44.386 | 99.99th=[ 5473] 00:29:44.386 bw ( KiB/s): min=20560, max=21984, per=24.86%, avg=21336.89, stdev=484.06, samples=9 00:29:44.386 iops : min= 2570, max= 2748, avg=2667.11, stdev=60.51, samples=9 00:29:44.386 lat (usec) : 750=0.05%, 1000=0.07% 00:29:44.386 lat (msec) : 2=1.09%, 4=92.40%, 10=6.38% 00:29:44.386 cpu : usr=96.82%, sys=2.86%, ctx=7, majf=0, minf=70 00:29:44.386 IO depths : 1=0.4%, 2=7.4%, 4=65.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.386 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.386 issued rwts: total=13344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:44.386 filename1: (groupid=0, jobs=1): err= 0: pid=1694677: Fri Jul 26 11:37:39 2024 00:29:44.386 read: IOPS=2746, BW=21.5MiB/s (22.5MB/s)(107MiB/5003msec) 00:29:44.386 slat (nsec): min=6016, max=48618, avg=9395.00, stdev=3510.26 00:29:44.386 clat (usec): min=554, max=5484, avg=2884.51, stdev=485.68 00:29:44.386 lat (usec): min=565, max=5502, avg=2893.90, stdev=485.58 00:29:44.386 clat percentiles (usec): 00:29:44.386 | 1.00th=[ 1713], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2573], 00:29:44.386 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2966], 00:29:44.386 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 4015], 00:29:44.386 
| 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 5342], 00:29:44.386 | 99.99th=[ 5473] 00:29:44.386 bw ( KiB/s): min=21168, max=23168, per=25.60%, avg=21971.20, stdev=618.15, samples=10 00:29:44.386 iops : min= 2646, max= 2896, avg=2746.40, stdev=77.27, samples=10 00:29:44.386 lat (usec) : 750=0.02%, 1000=0.04% 00:29:44.386 lat (msec) : 2=1.91%, 4=92.98%, 10=5.04% 00:29:44.386 cpu : usr=96.40%, sys=3.28%, ctx=10, majf=0, minf=18 00:29:44.386 IO depths : 1=0.3%, 2=5.9%, 4=65.5%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.386 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.386 issued rwts: total=13740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:44.386 00:29:44.386 Run status group 0 (all jobs): 00:29:44.386 READ: bw=83.8MiB/s (87.9MB/s), 20.5MiB/s-21.5MiB/s (21.5MB/s-22.5MB/s), io=419MiB (440MB), run=5001-5003msec 00:29:44.386 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:44.386 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:44.386 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:44.386 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 00:29:44.387 real 0m24.212s 00:29:44.387 user 4m52.937s 00:29:44.387 sys 0m4.196s 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 ************************************ 00:29:44.387 END TEST fio_dif_rand_params 00:29:44.387 ************************************ 00:29:44.387 11:37:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest 
fio_dif_digest 00:29:44.387 11:37:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:44.387 11:37:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 ************************************ 00:29:44.387 START TEST fio_dif_digest 00:29:44.387 ************************************ 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 bdev_null0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:44.387 [2024-07-26 11:37:39.837351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.387 { 00:29:44.387 "params": { 00:29:44.387 "name": "Nvme$subsystem", 00:29:44.387 "trtype": "$TEST_TRANSPORT", 00:29:44.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.387 "adrfam": "ipv4", 00:29:44.387 "trsvcid": "$NVMF_PORT", 00:29:44.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.387 "hdgst": ${hdgst:-false}, 00:29:44.387 "ddgst": ${ddgst:-false} 00:29:44.387 }, 00:29:44.387 "method": "bdev_nvme_attach_controller" 00:29:44.387 } 00:29:44.387 EOF 00:29:44.387 )") 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:44.387 "params": { 00:29:44.387 "name": "Nvme0", 00:29:44.387 "trtype": "tcp", 00:29:44.387 "traddr": "10.0.0.2", 00:29:44.387 "adrfam": "ipv4", 00:29:44.387 "trsvcid": "4420", 00:29:44.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.387 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.387 "hdgst": true, 00:29:44.387 "ddgst": true 00:29:44.387 }, 00:29:44.387 "method": "bdev_nvme_attach_controller" 00:29:44.387 }' 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:44.387 11:37:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.645 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:44.645 ... 
00:29:44.645 fio-3.35 00:29:44.645 Starting 3 threads 00:29:44.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.847 00:29:56.847 filename0: (groupid=0, jobs=1): err= 0: pid=1695900: Fri Jul 26 11:37:50 2024 00:29:56.847 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(366MiB/10048msec) 00:29:56.847 slat (nsec): min=6292, max=31922, avg=11597.07, stdev=2107.15 00:29:56.847 clat (usec): min=7626, max=52674, avg=10273.94, stdev=1832.03 00:29:56.847 lat (usec): min=7639, max=52706, avg=10285.53, stdev=1832.08 00:29:56.847 clat percentiles (usec): 00:29:56.847 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:29:56.847 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:29:56.847 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:29:56.847 | 99.00th=[11994], 99.50th=[12125], 99.90th=[52691], 99.95th=[52691], 00:29:56.847 | 99.99th=[52691] 00:29:56.847 bw ( KiB/s): min=34816, max=38656, per=34.71%, avg=37427.20, stdev=812.10, samples=20 00:29:56.847 iops : min= 272, max= 302, avg=292.40, stdev= 6.34, samples=20 00:29:56.847 lat (msec) : 10=37.12%, 20=62.71%, 50=0.07%, 100=0.10% 00:29:56.847 cpu : usr=94.10%, sys=5.59%, ctx=32, majf=0, minf=99 00:29:56.847 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:56.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.847 issued rwts: total=2926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:56.847 filename0: (groupid=0, jobs=1): err= 0: pid=1695901: Fri Jul 26 11:37:50 2024 00:29:56.847 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(352MiB/10045msec) 00:29:56.847 slat (nsec): min=6326, max=51125, avg=11588.91, stdev=2159.42 00:29:56.847 clat (usec): min=6632, max=52909, avg=10683.71, stdev=1314.09 00:29:56.847 lat (usec): min=6648, max=52918, avg=10695.30, 
stdev=1313.96 00:29:56.847 clat percentiles (usec): 00:29:56.847 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:29:56.847 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:29:56.847 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:29:56.847 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13698], 99.95th=[50070], 00:29:56.847 | 99.99th=[52691] 00:29:56.847 bw ( KiB/s): min=35328, max=36608, per=33.39%, avg=36001.68, stdev=374.01, samples=19 00:29:56.847 iops : min= 276, max= 286, avg=281.26, stdev= 2.92, samples=19 00:29:56.847 lat (msec) : 10=16.71%, 20=83.22%, 50=0.04%, 100=0.04% 00:29:56.847 cpu : usr=94.63%, sys=5.06%, ctx=31, majf=0, minf=161 00:29:56.847 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:56.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.847 issued rwts: total=2813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:56.847 filename0: (groupid=0, jobs=1): err= 0: pid=1695902: Fri Jul 26 11:37:50 2024 00:29:56.847 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(341MiB/10003msec) 00:29:56.847 slat (nsec): min=6270, max=33218, avg=11583.95, stdev=1879.05 00:29:56.847 clat (usec): min=6651, max=13586, avg=10999.04, stdev=792.14 00:29:56.847 lat (usec): min=6664, max=13599, avg=11010.63, stdev=792.05 00:29:56.847 clat percentiles (usec): 00:29:56.847 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:29:56.847 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:29:56.847 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:29:56.847 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13566], 99.95th=[13566], 00:29:56.847 | 99.99th=[13566] 00:29:56.847 bw ( KiB/s): min=33792, max=36352, per=32.36%, avg=34887.05, stdev=587.21, 
samples=19 00:29:56.847 iops : min= 264, max= 284, avg=272.53, stdev= 4.56, samples=19 00:29:56.847 lat (msec) : 10=7.78%, 20=92.22% 00:29:56.847 cpu : usr=95.18%, sys=4.52%, ctx=22, majf=0, minf=111 00:29:56.847 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:56.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.847 issued rwts: total=2725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:56.847 00:29:56.847 Run status group 0 (all jobs): 00:29:56.847 READ: bw=105MiB/s (110MB/s), 34.1MiB/s-36.4MiB/s (35.7MB/s-38.2MB/s), io=1058MiB (1109MB), run=10003-10048msec 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.847 11:37:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:56.848 11:37:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.848 11:37:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:56.848 11:37:51 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.848 00:29:56.848 real 0m11.255s 00:29:56.848 user 0m35.669s 00:29:56.848 sys 0m1.859s 00:29:56.848 11:37:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:56.848 11:37:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:56.848 ************************************ 00:29:56.848 END TEST fio_dif_digest 00:29:56.848 ************************************ 00:29:56.848 11:37:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:56.848 11:37:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.848 rmmod nvme_tcp 00:29:56.848 rmmod nvme_fabrics 00:29:56.848 rmmod nvme_keyring 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1687288 ']' 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1687288 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1687288 ']' 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1687288 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1687288 00:29:56.848 11:37:51 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687288' 00:29:56.848 killing process with pid 1687288 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1687288 00:29:56.848 11:37:51 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1687288 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:56.848 11:37:51 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:58.752 Waiting for block devices as requested 00:29:58.752 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:58.752 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:58.752 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:58.752 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:58.752 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:59.010 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:59.010 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:59.010 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:59.268 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:59.268 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:59.268 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:59.527 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:59.527 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:59.527 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:59.527 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:59.786 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:59.786 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:59.786 11:37:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:59.786 11:37:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:59.786 11:37:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:59.786 
11:37:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:59.786 11:37:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.786 11:37:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:59.786 11:37:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.384 11:37:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:02.384 00:30:02.384 real 1m14.324s 00:30:02.384 user 7m12.114s 00:30:02.384 sys 0m18.917s 00:30:02.384 11:37:57 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:02.384 11:37:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:02.384 ************************************ 00:30:02.384 END TEST nvmf_dif 00:30:02.384 ************************************ 00:30:02.384 11:37:57 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:02.384 11:37:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:02.384 11:37:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:02.384 11:37:57 -- common/autotest_common.sh@10 -- # set +x 00:30:02.384 ************************************ 00:30:02.384 START TEST nvmf_abort_qd_sizes 00:30:02.384 ************************************ 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:02.384 * Looking for test storage... 
00:30:02.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:02.384 11:37:57 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:02.384 11:37:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:07.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:07.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:07.659 Found net devices under 0000:86:00.0: cvl_0_0 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:07.659 Found net devices under 0000:86:00.1: cvl_0_1 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.659 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.917 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.917 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:07.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:07.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:30:07.917 00:30:07.917 --- 10.0.0.2 ping statistics --- 00:30:07.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.917 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:30:07.917 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:07.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:30:07.917 00:30:07.917 --- 10.0.0.1 ping statistics --- 00:30:07.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.917 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:07.917 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.917 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:07.917 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:07.917 11:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:10.452 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:10.452 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:10.452 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:10.452 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:10.452 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:10.452 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:10.452 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:10.711 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:30:10.711 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:12.088 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1703688 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1703688 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1703688 ']' 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:12.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:12.088 11:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:12.347 [2024-07-26 11:38:07.786273] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:30:12.347 [2024-07-26 11:38:07.786325] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.347 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.347 [2024-07-26 11:38:07.858544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.347 [2024-07-26 11:38:07.939962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.347 [2024-07-26 11:38:07.940005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.347 [2024-07-26 11:38:07.940012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.347 [2024-07-26 11:38:07.940018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.347 [2024-07-26 11:38:07.940023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:12.347 [2024-07-26 11:38:07.940077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.347 [2024-07-26 11:38:07.940107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.347 [2024-07-26 11:38:07.940216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.347 [2024-07-26 11:38:07.940217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.283 11:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:13.283 ************************************ 00:30:13.283 START TEST spdk_target_abort 00:30:13.283 ************************************ 00:30:13.283 11:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:30:13.283 11:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:13.283 11:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:13.283 11:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.283 11:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.570 spdk_targetn1 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.570 [2024-07-26 11:38:11.509726] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.570 [2024-07-26 11:38:11.542662] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:16.570 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:16.571 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:16.571 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.571 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:16.571 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:16.571 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:16.571 11:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:16.571 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.101 Initializing NVMe Controllers 00:30:19.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:19.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:19.101 Initialization complete. Launching workers. 
00:30:19.101 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15661, failed: 0 00:30:19.101 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1410, failed to submit 14251 00:30:19.101 success 753, unsuccessful 657, failed 0 00:30:19.101 11:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:19.101 11:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:19.101 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.387 Initializing NVMe Controllers 00:30:22.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:22.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:22.387 Initialization complete. Launching workers. 
00:30:22.387 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8607, failed: 0 00:30:22.387 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 7368 00:30:22.387 success 317, unsuccessful 922, failed 0 00:30:22.387 11:38:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:22.387 11:38:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:22.387 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.672 Initializing NVMe Controllers 00:30:25.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:25.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:25.672 Initialization complete. Launching workers. 
00:30:25.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38541, failed: 0 00:30:25.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2901, failed to submit 35640 00:30:25.672 success 610, unsuccessful 2291, failed 0 00:30:25.672 11:38:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:25.672 11:38:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.672 11:38:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:25.672 11:38:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.672 11:38:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:25.672 11:38:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.672 11:38:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1703688 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1703688 ']' 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1703688 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1703688 00:30:27.575 11:38:23 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1703688' 00:30:27.575 killing process with pid 1703688 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1703688 00:30:27.575 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1703688 00:30:27.840 00:30:27.840 real 0m14.658s 00:30:27.840 user 0m58.417s 00:30:27.840 sys 0m2.264s 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.840 ************************************ 00:30:27.840 END TEST spdk_target_abort 00:30:27.840 ************************************ 00:30:27.840 11:38:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:27.840 11:38:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:27.840 11:38:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:27.840 11:38:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:27.840 ************************************ 00:30:27.840 START TEST kernel_target_abort 00:30:27.840 ************************************ 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:27.840 11:38:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@639 -- # local block nvme 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:27.840 11:38:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:31.137 Waiting for block devices as requested 00:30:31.137 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:31.137 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:31.137 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:31.138 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:31.138 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:31.138 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:31.138 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:31.138 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:31.138 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:31.394 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:31.394 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:31.394 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:31.727 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:31.727 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:31.727 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:31.727 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:31.988 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:31.988 11:38:27 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:31.988 No valid GPT data, bailing 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # echo 1
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:30:31.988
00:30:31.988 Discovery Log Number of Records 2, Generation counter 2
00:30:31.988 =====Discovery Log Entry 0======
00:30:31.988 trtype: tcp
00:30:31.988 adrfam: ipv4
00:30:31.988 subtype: current discovery subsystem
00:30:31.988 treq: not specified, sq flow control disable supported
00:30:31.988 portid: 1
00:30:31.988 trsvcid: 4420
00:30:31.988 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:30:31.988 traddr: 10.0.0.1
00:30:31.988 eflags: none
00:30:31.988 sectype: none
00:30:31.988 =====Discovery Log Entry 1======
00:30:31.988 trtype: tcp
00:30:31.988 adrfam: ipv4
00:30:31.988 subtype: nvme subsystem
00:30:31.988 treq: not specified, sq flow control disable supported
00:30:31.988 portid: 1
00:30:31.988 trsvcid: 4420
00:30:31.988 subnqn: nqn.2016-06.io.spdk:testnqn
00:30:31.988 traddr: 10.0.0.1
00:30:31.988 eflags: none
00:30:31.988 sectype: none
00:30:31.988 11:38:27 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:31.988 11:38:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:32.261 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.541 Initializing NVMe Controllers 00:30:35.541 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.541 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:35.541 Initialization complete. Launching workers. 
00:30:35.541 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93560, failed: 0 00:30:35.541 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93560, failed to submit 0 00:30:35.541 success 0, unsuccessful 93560, failed 0 00:30:35.541 11:38:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:35.541 11:38:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.541 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.820 Initializing NVMe Controllers 00:30:38.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:38.820 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:38.820 Initialization complete. Launching workers. 
00:30:38.820 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 148781, failed: 0 00:30:38.820 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37226, failed to submit 111555 00:30:38.820 success 0, unsuccessful 37226, failed 0 00:30:38.820 11:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:38.820 11:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:38.820 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.350 Initializing NVMe Controllers 00:30:41.350 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:41.350 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:41.350 Initialization complete. Launching workers. 
00:30:41.350 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141563, failed: 0 00:30:41.350 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35454, failed to submit 106109 00:30:41.350 success 0, unsuccessful 35454, failed 0 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:41.350 11:38:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:44.640 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:44.640 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:45.576 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:45.835 00:30:45.835 real 0m17.866s 00:30:45.835 user 0m8.877s 00:30:45.835 sys 0m4.986s 00:30:45.835 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.835 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.835 ************************************ 00:30:45.835 END TEST kernel_target_abort 00:30:45.835 ************************************ 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:45.835 rmmod nvme_tcp 00:30:45.835 rmmod nvme_fabrics 00:30:45.835 rmmod nvme_keyring 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1703688 ']' 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1703688 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1703688 ']' 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1703688 00:30:45.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1703688) - No such process 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1703688 is not found' 00:30:45.835 Process with pid 1703688 is not found 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:45.835 11:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:48.371 Waiting for block devices as requested 00:30:48.630 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:48.630 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:48.630 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:48.889 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:48.889 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:48.889 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:49.147 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:49.147 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:49.147 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:49.147 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:49.407 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:49.407 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:49.407 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:49.665 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:49.665 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:49.665 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:49.665 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:49.924 11:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:49.924 11:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:49.924 11:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:49.924 11:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:49.924 11:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.924 11:38:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:49.924 11:38:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.829 11:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:51.829 00:30:51.829 real 0m49.934s 00:30:51.829 user 1m11.533s 00:30:51.829 sys 0m15.792s 00:30:51.829 11:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:51.829 11:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:51.829 ************************************ 00:30:51.829 END TEST nvmf_abort_qd_sizes 00:30:51.829 ************************************ 00:30:52.089 11:38:47 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:52.089 11:38:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:52.089 11:38:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.089 11:38:47 -- common/autotest_common.sh@10 -- # set +x 00:30:52.089 ************************************ 00:30:52.089 START TEST keyring_file 00:30:52.089 ************************************ 00:30:52.089 11:38:47 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:52.089 * Looking for test storage... 00:30:52.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.089 11:38:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.089 11:38:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.089 11:38:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.089 11:38:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.089 11:38:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.089 11:38:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.089 11:38:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:52.089 11:38:47 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:52.089 11:38:47 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xOvQ2paAnz 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xOvQ2paAnz 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xOvQ2paAnz 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xOvQ2paAnz 00:30:52.089 11:38:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jtB9rwL5mi 00:30:52.089 11:38:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:52.089 11:38:47 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:52.089 11:38:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:52.348 11:38:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jtB9rwL5mi 00:30:52.348 11:38:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jtB9rwL5mi 00:30:52.348 11:38:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jtB9rwL5mi 00:30:52.348 11:38:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:52.348 11:38:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=1712708 00:30:52.348 11:38:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1712708 00:30:52.348 11:38:47 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1712708 ']' 00:30:52.348 11:38:47 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.348 11:38:47 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.348 11:38:47 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.348 11:38:47 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.348 11:38:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:52.348 [2024-07-26 11:38:47.799339] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:30:52.348 [2024-07-26 11:38:47.799382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712708 ] 00:30:52.348 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.348 [2024-07-26 11:38:47.863332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.348 [2024-07-26 11:38:47.933537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:53.282 11:38:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:53.282 [2024-07-26 11:38:48.618988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.282 null0 00:30:53.282 [2024-07-26 11:38:48.651041] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:53.282 [2024-07-26 11:38:48.651236] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:53.282 [2024-07-26 11:38:48.659057] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.282 11:38:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:53.282 [2024-07-26 11:38:48.675107] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:53.282 request: 00:30:53.282 { 00:30:53.282 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.282 "secure_channel": false, 00:30:53.282 "listen_address": { 00:30:53.282 "trtype": "tcp", 00:30:53.282 "traddr": "127.0.0.1", 00:30:53.282 "trsvcid": "4420" 00:30:53.282 }, 00:30:53.282 "method": "nvmf_subsystem_add_listener", 00:30:53.282 "req_id": 1 00:30:53.282 } 00:30:53.282 Got JSON-RPC error response 00:30:53.282 response: 00:30:53.282 { 00:30:53.282 "code": -32602, 00:30:53.282 "message": "Invalid parameters" 00:30:53.282 } 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:53.282 11:38:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:53.283 11:38:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=1712793 00:30:53.283 11:38:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1712793 
/var/tmp/bperf.sock 00:30:53.283 11:38:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1712793 ']' 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:53.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.283 11:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:53.283 [2024-07-26 11:38:48.727404] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:30:53.283 [2024-07-26 11:38:48.727443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712793 ] 00:30:53.283 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.283 [2024-07-26 11:38:48.791789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.283 [2024-07-26 11:38:48.870077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.215 11:38:49 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:54.215 11:38:49 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:54.215 11:38:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:54.215 11:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:54.215 11:38:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jtB9rwL5mi 00:30:54.215 11:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jtB9rwL5mi 00:30:54.473 11:38:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:54.473 11:38:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:54.473 11:38:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.473 11:38:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:54.473 11:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.473 11:38:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.xOvQ2paAnz == 
\/\t\m\p\/\t\m\p\.\x\O\v\Q\2\p\a\A\n\z ]] 00:30:54.473 11:38:50 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:54.473 11:38:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:54.473 11:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.473 11:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.473 11:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.730 11:38:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jtB9rwL5mi == \/\t\m\p\/\t\m\p\.\j\t\B\9\r\w\L\5\m\i ]] 00:30:54.730 11:38:50 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:54.730 11:38:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:54.730 11:38:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.730 11:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.730 11:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:54.730 11:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.987 11:38:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:54.987 11:38:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:54.987 11:38:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:54.987 11:38:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.987 11:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.987 11:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.987 11:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.987 11:38:50 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:30:54.987 11:38:50 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:54.987 11:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.245 [2024-07-26 11:38:50.783140] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:55.245 nvme0n1 00:30:55.245 11:38:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:55.245 11:38:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:55.245 11:38:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.245 11:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:55.245 11:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:55.245 11:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.502 11:38:51 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:55.502 11:38:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:55.502 11:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:55.502 11:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.502 11:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:55.502 11:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:55.502 11:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.760 11:38:51 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:55.760 11:38:51 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:55.760 Running I/O for 1 seconds... 00:30:56.691 00:30:56.691 Latency(us) 00:30:56.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.691 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:56.691 nvme0n1 : 1.00 18280.32 71.41 0.00 0.00 6985.23 3651.29 14168.26 00:30:56.691 =================================================================================================================== 00:30:56.691 Total : 18280.32 71.41 0.00 0.00 6985.23 3651.29 14168.26 00:30:56.691 0 00:30:56.691 11:38:52 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:56.691 11:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:56.948 11:38:52 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:56.948 11:38:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:56.948 11:38:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.948 11:38:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.948 11:38:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:56.948 11:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.206 11:38:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:57.206 11:38:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:57.206 11:38:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:57.206 11:38:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.206 11:38:52 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.206 11:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.206 11:38:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.206 11:38:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:57.206 11:38:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.206 11:38:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:57.206 11:38:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.206 11:38:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:57.206 11:38:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:57.206 11:38:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:57.206 11:38:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:57.206 11:38:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.206 11:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.464 [2024-07-26 11:38:53.014503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:57.464 [2024-07-26 11:38:53.015184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709820 (107): Transport endpoint is not connected 00:30:57.464 [2024-07-26 11:38:53.016179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x709820 (9): Bad file descriptor 00:30:57.464 [2024-07-26 11:38:53.017180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.464 [2024-07-26 11:38:53.017191] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:57.464 [2024-07-26 11:38:53.017197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.464 request: 00:30:57.464 { 00:30:57.464 "name": "nvme0", 00:30:57.464 "trtype": "tcp", 00:30:57.464 "traddr": "127.0.0.1", 00:30:57.464 "adrfam": "ipv4", 00:30:57.464 "trsvcid": "4420", 00:30:57.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.464 "prchk_reftag": false, 00:30:57.464 "prchk_guard": false, 00:30:57.464 "hdgst": false, 00:30:57.464 "ddgst": false, 00:30:57.464 "psk": "key1", 00:30:57.464 "method": "bdev_nvme_attach_controller", 00:30:57.464 "req_id": 1 00:30:57.464 } 00:30:57.464 Got JSON-RPC error response 00:30:57.464 response: 00:30:57.464 { 00:30:57.464 "code": -5, 00:30:57.464 "message": "Input/output error" 00:30:57.464 } 00:30:57.464 11:38:53 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:57.464 11:38:53 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:57.464 11:38:53 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:57.464 11:38:53 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:57.464 11:38:53 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:57.464 
11:38:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:57.464 11:38:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.464 11:38:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.464 11:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.464 11:38:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.729 11:38:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:57.729 11:38:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:57.729 11:38:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:57.729 11:38:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.729 11:38:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.729 11:38:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.729 11:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.014 11:38:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:58.014 11:38:53 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:58.014 11:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:58.014 11:38:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:58.014 11:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:58.319 11:38:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:58.320 11:38:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:58.320 11:38:53 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.320 11:38:53 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:58.320 11:38:53 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xOvQ2paAnz 00:30:58.320 11:38:53 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:58.320 11:38:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:58.320 11:38:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:58.320 11:38:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:58.320 11:38:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.320 11:38:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:58.320 11:38:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.320 11:38:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:58.320 11:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:58.578 [2024-07-26 11:38:54.077272] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xOvQ2paAnz': 0100660 00:30:58.578 [2024-07-26 11:38:54.077297] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:58.578 request: 00:30:58.578 { 00:30:58.578 "name": "key0", 00:30:58.578 "path": "/tmp/tmp.xOvQ2paAnz", 00:30:58.578 "method": "keyring_file_add_key", 00:30:58.578 "req_id": 1 00:30:58.578 } 00:30:58.578 Got JSON-RPC error response 00:30:58.578 response: 00:30:58.578 { 00:30:58.578 "code": -1, 00:30:58.578 "message": "Operation not permitted" 
00:30:58.578 } 00:30:58.578 11:38:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:58.578 11:38:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:58.578 11:38:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:58.578 11:38:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:58.578 11:38:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xOvQ2paAnz 00:30:58.578 11:38:54 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:58.578 11:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xOvQ2paAnz 00:30:58.836 11:38:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xOvQ2paAnz 00:30:58.836 11:38:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:58.836 11:38:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:58.836 11:38:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:58.836 11:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:58.836 11:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.836 11:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.836 11:38:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:58.836 11:38:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.836 11:38:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:58.836 11:38:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.836 11:38:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:58.836 11:38:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.836 11:38:54 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:58.836 11:38:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.836 11:38:54 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.836 11:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.093 [2024-07-26 11:38:54.646768] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xOvQ2paAnz': No such file or directory 00:30:59.093 [2024-07-26 11:38:54.646791] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:59.093 [2024-07-26 11:38:54.646810] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:59.093 [2024-07-26 11:38:54.646816] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:59.093 [2024-07-26 11:38:54.646822] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:59.093 request: 00:30:59.093 { 00:30:59.093 "name": "nvme0", 00:30:59.093 "trtype": "tcp", 00:30:59.093 "traddr": "127.0.0.1", 00:30:59.093 "adrfam": "ipv4", 00:30:59.093 "trsvcid": "4420", 00:30:59.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.093 
"prchk_reftag": false, 00:30:59.093 "prchk_guard": false, 00:30:59.093 "hdgst": false, 00:30:59.093 "ddgst": false, 00:30:59.093 "psk": "key0", 00:30:59.093 "method": "bdev_nvme_attach_controller", 00:30:59.093 "req_id": 1 00:30:59.093 } 00:30:59.093 Got JSON-RPC error response 00:30:59.093 response: 00:30:59.093 { 00:30:59.093 "code": -19, 00:30:59.093 "message": "No such device" 00:30:59.093 } 00:30:59.093 11:38:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:59.093 11:38:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:59.093 11:38:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:59.093 11:38:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:59.093 11:38:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:59.093 11:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:59.351 11:38:54 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.C3XvuU97ZZ 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:59.351 11:38:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:59.351 11:38:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:59.351 11:38:54 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:59.351 11:38:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:59.351 11:38:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:59.351 11:38:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.C3XvuU97ZZ 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.C3XvuU97ZZ 00:30:59.351 11:38:54 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.C3XvuU97ZZ 00:30:59.351 11:38:54 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C3XvuU97ZZ 00:30:59.351 11:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C3XvuU97ZZ 00:30:59.609 11:38:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.609 11:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.867 nvme0n1 00:30:59.868 11:38:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:59.868 11:38:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:59.868 11:38:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.868 11:38:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.868 11:38:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:59.868 11:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:30:59.868 11:38:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:59.868 11:38:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:59.868 11:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:00.126 11:38:55 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:00.126 11:38:55 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:00.126 11:38:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.126 11:38:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.126 11:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.384 11:38:55 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:00.384 11:38:55 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:00.384 11:38:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.384 11:38:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.384 11:38:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.384 11:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.384 11:38:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.384 11:38:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:00.384 11:38:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:00.384 11:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:00.642 11:38:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:31:00.642 11:38:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:00.642 11:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.900 11:38:56 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:00.900 11:38:56 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C3XvuU97ZZ 00:31:00.900 11:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C3XvuU97ZZ 00:31:00.900 11:38:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jtB9rwL5mi 00:31:00.900 11:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jtB9rwL5mi 00:31:01.158 11:38:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:01.158 11:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:01.416 nvme0n1 00:31:01.416 11:38:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:01.416 11:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:01.675 11:38:57 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:01.675 "subsystems": [ 00:31:01.675 { 00:31:01.675 "subsystem": "keyring", 00:31:01.675 "config": [ 00:31:01.675 { 00:31:01.675 "method": "keyring_file_add_key", 00:31:01.675 
"params": { 00:31:01.675 "name": "key0", 00:31:01.675 "path": "/tmp/tmp.C3XvuU97ZZ" 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "keyring_file_add_key", 00:31:01.675 "params": { 00:31:01.675 "name": "key1", 00:31:01.675 "path": "/tmp/tmp.jtB9rwL5mi" 00:31:01.675 } 00:31:01.675 } 00:31:01.675 ] 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "subsystem": "iobuf", 00:31:01.675 "config": [ 00:31:01.675 { 00:31:01.675 "method": "iobuf_set_options", 00:31:01.675 "params": { 00:31:01.675 "small_pool_count": 8192, 00:31:01.675 "large_pool_count": 1024, 00:31:01.675 "small_bufsize": 8192, 00:31:01.675 "large_bufsize": 135168 00:31:01.675 } 00:31:01.675 } 00:31:01.675 ] 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "subsystem": "sock", 00:31:01.675 "config": [ 00:31:01.675 { 00:31:01.675 "method": "sock_set_default_impl", 00:31:01.675 "params": { 00:31:01.675 "impl_name": "posix" 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "sock_impl_set_options", 00:31:01.675 "params": { 00:31:01.675 "impl_name": "ssl", 00:31:01.675 "recv_buf_size": 4096, 00:31:01.675 "send_buf_size": 4096, 00:31:01.675 "enable_recv_pipe": true, 00:31:01.675 "enable_quickack": false, 00:31:01.675 "enable_placement_id": 0, 00:31:01.675 "enable_zerocopy_send_server": true, 00:31:01.675 "enable_zerocopy_send_client": false, 00:31:01.675 "zerocopy_threshold": 0, 00:31:01.675 "tls_version": 0, 00:31:01.675 "enable_ktls": false 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "sock_impl_set_options", 00:31:01.675 "params": { 00:31:01.675 "impl_name": "posix", 00:31:01.675 "recv_buf_size": 2097152, 00:31:01.675 "send_buf_size": 2097152, 00:31:01.675 "enable_recv_pipe": true, 00:31:01.675 "enable_quickack": false, 00:31:01.675 "enable_placement_id": 0, 00:31:01.675 "enable_zerocopy_send_server": true, 00:31:01.675 "enable_zerocopy_send_client": false, 00:31:01.675 "zerocopy_threshold": 0, 00:31:01.675 "tls_version": 0, 00:31:01.675 "enable_ktls": false 
00:31:01.675 } 00:31:01.675 } 00:31:01.675 ] 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "subsystem": "vmd", 00:31:01.675 "config": [] 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "subsystem": "accel", 00:31:01.675 "config": [ 00:31:01.675 { 00:31:01.675 "method": "accel_set_options", 00:31:01.675 "params": { 00:31:01.675 "small_cache_size": 128, 00:31:01.675 "large_cache_size": 16, 00:31:01.675 "task_count": 2048, 00:31:01.675 "sequence_count": 2048, 00:31:01.675 "buf_count": 2048 00:31:01.675 } 00:31:01.675 } 00:31:01.675 ] 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "subsystem": "bdev", 00:31:01.675 "config": [ 00:31:01.675 { 00:31:01.675 "method": "bdev_set_options", 00:31:01.675 "params": { 00:31:01.675 "bdev_io_pool_size": 65535, 00:31:01.675 "bdev_io_cache_size": 256, 00:31:01.675 "bdev_auto_examine": true, 00:31:01.675 "iobuf_small_cache_size": 128, 00:31:01.675 "iobuf_large_cache_size": 16 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "bdev_raid_set_options", 00:31:01.675 "params": { 00:31:01.675 "process_window_size_kb": 1024, 00:31:01.675 "process_max_bandwidth_mb_sec": 0 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "bdev_iscsi_set_options", 00:31:01.675 "params": { 00:31:01.675 "timeout_sec": 30 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "bdev_nvme_set_options", 00:31:01.675 "params": { 00:31:01.675 "action_on_timeout": "none", 00:31:01.675 "timeout_us": 0, 00:31:01.675 "timeout_admin_us": 0, 00:31:01.675 "keep_alive_timeout_ms": 10000, 00:31:01.675 "arbitration_burst": 0, 00:31:01.675 "low_priority_weight": 0, 00:31:01.675 "medium_priority_weight": 0, 00:31:01.675 "high_priority_weight": 0, 00:31:01.675 "nvme_adminq_poll_period_us": 10000, 00:31:01.675 "nvme_ioq_poll_period_us": 0, 00:31:01.675 "io_queue_requests": 512, 00:31:01.675 "delay_cmd_submit": true, 00:31:01.675 "transport_retry_count": 4, 00:31:01.675 "bdev_retry_count": 3, 00:31:01.675 "transport_ack_timeout": 0, 
00:31:01.675 "ctrlr_loss_timeout_sec": 0, 00:31:01.675 "reconnect_delay_sec": 0, 00:31:01.675 "fast_io_fail_timeout_sec": 0, 00:31:01.675 "disable_auto_failback": false, 00:31:01.675 "generate_uuids": false, 00:31:01.675 "transport_tos": 0, 00:31:01.675 "nvme_error_stat": false, 00:31:01.675 "rdma_srq_size": 0, 00:31:01.675 "io_path_stat": false, 00:31:01.675 "allow_accel_sequence": false, 00:31:01.675 "rdma_max_cq_size": 0, 00:31:01.675 "rdma_cm_event_timeout_ms": 0, 00:31:01.675 "dhchap_digests": [ 00:31:01.675 "sha256", 00:31:01.675 "sha384", 00:31:01.675 "sha512" 00:31:01.675 ], 00:31:01.675 "dhchap_dhgroups": [ 00:31:01.675 "null", 00:31:01.675 "ffdhe2048", 00:31:01.675 "ffdhe3072", 00:31:01.675 "ffdhe4096", 00:31:01.675 "ffdhe6144", 00:31:01.675 "ffdhe8192" 00:31:01.675 ] 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "bdev_nvme_attach_controller", 00:31:01.675 "params": { 00:31:01.675 "name": "nvme0", 00:31:01.675 "trtype": "TCP", 00:31:01.675 "adrfam": "IPv4", 00:31:01.675 "traddr": "127.0.0.1", 00:31:01.675 "trsvcid": "4420", 00:31:01.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.675 "prchk_reftag": false, 00:31:01.675 "prchk_guard": false, 00:31:01.675 "ctrlr_loss_timeout_sec": 0, 00:31:01.675 "reconnect_delay_sec": 0, 00:31:01.675 "fast_io_fail_timeout_sec": 0, 00:31:01.675 "psk": "key0", 00:31:01.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.675 "hdgst": false, 00:31:01.675 "ddgst": false 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "bdev_nvme_set_hotplug", 00:31:01.675 "params": { 00:31:01.675 "period_us": 100000, 00:31:01.675 "enable": false 00:31:01.675 } 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "method": "bdev_wait_for_examine" 00:31:01.675 } 00:31:01.675 ] 00:31:01.675 }, 00:31:01.675 { 00:31:01.675 "subsystem": "nbd", 00:31:01.675 "config": [] 00:31:01.675 } 00:31:01.675 ] 00:31:01.675 }' 00:31:01.675 11:38:57 keyring_file -- keyring/file.sh@114 -- # killprocess 1712793 00:31:01.675 
11:38:57 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1712793 ']' 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1712793 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1712793 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1712793' 00:31:01.675 killing process with pid 1712793 00:31:01.675 11:38:57 keyring_file -- common/autotest_common.sh@969 -- # kill 1712793 00:31:01.676 Received shutdown signal, test time was about 1.000000 seconds 00:31:01.676 00:31:01.676 Latency(us) 00:31:01.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.676 =================================================================================================================== 00:31:01.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:01.676 11:38:57 keyring_file -- common/autotest_common.sh@974 -- # wait 1712793 00:31:01.935 11:38:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=1714350 00:31:01.935 11:38:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1714350 /var/tmp/bperf.sock 00:31:01.935 11:38:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1714350 ']' 00:31:01.935 11:38:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:01.935 11:38:57 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:01.935 11:38:57 keyring_file -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:31:01.935 11:38:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:01.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:01.935 11:38:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:01.935 "subsystems": [ 00:31:01.935 { 00:31:01.935 "subsystem": "keyring", 00:31:01.935 "config": [ 00:31:01.935 { 00:31:01.935 "method": "keyring_file_add_key", 00:31:01.935 "params": { 00:31:01.935 "name": "key0", 00:31:01.935 "path": "/tmp/tmp.C3XvuU97ZZ" 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "keyring_file_add_key", 00:31:01.935 "params": { 00:31:01.935 "name": "key1", 00:31:01.935 "path": "/tmp/tmp.jtB9rwL5mi" 00:31:01.935 } 00:31:01.935 } 00:31:01.935 ] 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "subsystem": "iobuf", 00:31:01.935 "config": [ 00:31:01.935 { 00:31:01.935 "method": "iobuf_set_options", 00:31:01.935 "params": { 00:31:01.935 "small_pool_count": 8192, 00:31:01.935 "large_pool_count": 1024, 00:31:01.935 "small_bufsize": 8192, 00:31:01.935 "large_bufsize": 135168 00:31:01.935 } 00:31:01.935 } 00:31:01.935 ] 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "subsystem": "sock", 00:31:01.935 "config": [ 00:31:01.935 { 00:31:01.935 "method": "sock_set_default_impl", 00:31:01.935 "params": { 00:31:01.935 "impl_name": "posix" 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "sock_impl_set_options", 00:31:01.935 "params": { 00:31:01.935 "impl_name": "ssl", 00:31:01.935 "recv_buf_size": 4096, 00:31:01.935 "send_buf_size": 4096, 00:31:01.935 "enable_recv_pipe": true, 00:31:01.935 "enable_quickack": false, 00:31:01.935 "enable_placement_id": 0, 00:31:01.935 "enable_zerocopy_send_server": true, 00:31:01.935 "enable_zerocopy_send_client": false, 00:31:01.935 "zerocopy_threshold": 0, 00:31:01.935 "tls_version": 0, 
00:31:01.935 "enable_ktls": false 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "sock_impl_set_options", 00:31:01.935 "params": { 00:31:01.935 "impl_name": "posix", 00:31:01.935 "recv_buf_size": 2097152, 00:31:01.935 "send_buf_size": 2097152, 00:31:01.935 "enable_recv_pipe": true, 00:31:01.935 "enable_quickack": false, 00:31:01.935 "enable_placement_id": 0, 00:31:01.935 "enable_zerocopy_send_server": true, 00:31:01.935 "enable_zerocopy_send_client": false, 00:31:01.935 "zerocopy_threshold": 0, 00:31:01.935 "tls_version": 0, 00:31:01.935 "enable_ktls": false 00:31:01.935 } 00:31:01.935 } 00:31:01.935 ] 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "subsystem": "vmd", 00:31:01.935 "config": [] 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "subsystem": "accel", 00:31:01.935 "config": [ 00:31:01.935 { 00:31:01.935 "method": "accel_set_options", 00:31:01.935 "params": { 00:31:01.935 "small_cache_size": 128, 00:31:01.935 "large_cache_size": 16, 00:31:01.935 "task_count": 2048, 00:31:01.935 "sequence_count": 2048, 00:31:01.935 "buf_count": 2048 00:31:01.935 } 00:31:01.935 } 00:31:01.935 ] 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "subsystem": "bdev", 00:31:01.935 "config": [ 00:31:01.935 { 00:31:01.935 "method": "bdev_set_options", 00:31:01.935 "params": { 00:31:01.935 "bdev_io_pool_size": 65535, 00:31:01.935 "bdev_io_cache_size": 256, 00:31:01.935 "bdev_auto_examine": true, 00:31:01.935 "iobuf_small_cache_size": 128, 00:31:01.935 "iobuf_large_cache_size": 16 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "bdev_raid_set_options", 00:31:01.935 "params": { 00:31:01.935 "process_window_size_kb": 1024, 00:31:01.935 "process_max_bandwidth_mb_sec": 0 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "bdev_iscsi_set_options", 00:31:01.935 "params": { 00:31:01.935 "timeout_sec": 30 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "bdev_nvme_set_options", 00:31:01.935 "params": { 00:31:01.935 
"action_on_timeout": "none", 00:31:01.935 "timeout_us": 0, 00:31:01.935 "timeout_admin_us": 0, 00:31:01.935 "keep_alive_timeout_ms": 10000, 00:31:01.935 "arbitration_burst": 0, 00:31:01.935 "low_priority_weight": 0, 00:31:01.935 "medium_priority_weight": 0, 00:31:01.935 "high_priority_weight": 0, 00:31:01.935 "nvme_adminq_poll_period_us": 10000, 00:31:01.935 "nvme_ioq_poll_period_us": 0, 00:31:01.935 "io_queue_requests": 512, 00:31:01.935 "delay_cmd_submit": true, 00:31:01.935 "transport_retry_count": 4, 00:31:01.935 "bdev_retry_count": 3, 00:31:01.935 "transport_ack_timeout": 0, 00:31:01.935 "ctrlr_loss_timeout_sec": 0, 00:31:01.935 "reconnect_delay_sec": 0, 00:31:01.935 "fast_io_fail_timeout_sec": 0, 00:31:01.935 "disable_auto_failback": false, 00:31:01.935 "generate_uuids": false, 00:31:01.935 "transport_tos": 0, 00:31:01.935 "nvme_error_stat": false, 00:31:01.935 "rdma_srq_size": 0, 00:31:01.935 "io_path_stat": false, 00:31:01.935 "allow_accel_sequence": false, 00:31:01.935 "rdma_max_cq_size": 0, 00:31:01.935 "rdma_cm_event_timeout_ms": 0, 00:31:01.935 "dhchap_digests": [ 00:31:01.935 "sha256", 00:31:01.935 "sha384", 00:31:01.935 "sha512" 00:31:01.935 ], 00:31:01.935 "dhchap_dhgroups": [ 00:31:01.935 "null", 00:31:01.935 "ffdhe2048", 00:31:01.935 "ffdhe3072", 00:31:01.935 "ffdhe4096", 00:31:01.935 "ffdhe6144", 00:31:01.935 "ffdhe8192" 00:31:01.935 ] 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "bdev_nvme_attach_controller", 00:31:01.935 "params": { 00:31:01.935 "name": "nvme0", 00:31:01.935 "trtype": "TCP", 00:31:01.935 "adrfam": "IPv4", 00:31:01.935 "traddr": "127.0.0.1", 00:31:01.935 "trsvcid": "4420", 00:31:01.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.935 "prchk_reftag": false, 00:31:01.935 "prchk_guard": false, 00:31:01.935 "ctrlr_loss_timeout_sec": 0, 00:31:01.935 "reconnect_delay_sec": 0, 00:31:01.935 "fast_io_fail_timeout_sec": 0, 00:31:01.935 "psk": "key0", 00:31:01.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:31:01.935 "hdgst": false, 00:31:01.935 "ddgst": false 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "bdev_nvme_set_hotplug", 00:31:01.935 "params": { 00:31:01.935 "period_us": 100000, 00:31:01.935 "enable": false 00:31:01.935 } 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "method": "bdev_wait_for_examine" 00:31:01.935 } 00:31:01.935 ] 00:31:01.935 }, 00:31:01.935 { 00:31:01.935 "subsystem": "nbd", 00:31:01.935 "config": [] 00:31:01.935 } 00:31:01.935 ] 00:31:01.935 }' 00:31:01.936 11:38:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:01.936 11:38:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:01.936 [2024-07-26 11:38:57.474499] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 00:31:01.936 [2024-07-26 11:38:57.474548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714350 ] 00:31:01.936 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.936 [2024-07-26 11:38:57.537787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.193 [2024-07-26 11:38:57.617489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.193 [2024-07-26 11:38:57.775338] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:02.759 11:38:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:02.759 11:38:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:02.759 11:38:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:02.759 11:38:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:02.759 11:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
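The keyring checks in this test drive `keyring_get_keys` over the bperf socket and post-process the JSON with `jq` (`jq length` for the entry count, then `select(.name == ...) | .refcnt` for per-key refcounts). A minimal Python sketch of those same filters; the key list below is illustrative sample data shaped like the RPC output, not a captured response:

```python
import json

# Illustrative stand-in for `rpc.py -s /var/tmp/bperf.sock keyring_get_keys`
# output; real refcnt values depend on the attached controller.
keys = json.loads("""[
  {"name": "key0", "path": "/tmp/tmp.C3XvuU97ZZ", "refcnt": 2},
  {"name": "key1", "path": "/tmp/tmp.jtB9rwL5mi", "refcnt": 1}
]""")

# jq length
assert len(keys) == 2

def get_refcnt(keys, name):
    # jq: '.[] | select(.name == $name)' piped into 'jq -r .refcnt'
    return next(k["refcnt"] for k in keys if k["name"] == name)

print(get_refcnt(keys, "key0"), get_refcnt(keys, "key1"))  # 2 1
```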
00:31:03.016 11:38:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:03.016 11:38:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.016 11:38:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:03.016 11:38:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.016 11:38:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:03.274 11:38:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:03.274 11:38:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:03.274 11:38:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:03.274 11:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:03.532 11:38:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:03.532 11:38:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:03.532 11:38:59 keyring_file -- keyring/file.sh@19 -- # rm -f 
/tmp/tmp.C3XvuU97ZZ /tmp/tmp.jtB9rwL5mi 00:31:03.532 11:38:59 keyring_file -- keyring/file.sh@20 -- # killprocess 1714350 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1714350 ']' 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1714350 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714350 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714350' 00:31:03.532 killing process with pid 1714350 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@969 -- # kill 1714350 00:31:03.532 Received shutdown signal, test time was about 1.000000 seconds 00:31:03.532 00:31:03.532 Latency(us) 00:31:03.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.532 =================================================================================================================== 00:31:03.532 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:03.532 11:38:59 keyring_file -- common/autotest_common.sh@974 -- # wait 1714350 00:31:03.791 11:38:59 keyring_file -- keyring/file.sh@21 -- # killprocess 1712708 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1712708 ']' 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1712708 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:03.791 11:38:59 keyring_file -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1712708 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1712708' 00:31:03.791 killing process with pid 1712708 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@969 -- # kill 1712708 00:31:03.791 [2024-07-26 11:38:59.279893] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:03.791 11:38:59 keyring_file -- common/autotest_common.sh@974 -- # wait 1712708 00:31:04.050 00:31:04.050 real 0m12.049s 00:31:04.050 user 0m28.930s 00:31:04.050 sys 0m2.717s 00:31:04.050 11:38:59 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:04.050 11:38:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:04.050 ************************************ 00:31:04.050 END TEST keyring_file 00:31:04.050 ************************************ 00:31:04.050 11:38:59 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:31:04.050 11:38:59 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:04.050 11:38:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:04.050 11:38:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.050 11:38:59 -- common/autotest_common.sh@10 -- # set +x 00:31:04.050 ************************************ 00:31:04.050 START TEST keyring_linux 00:31:04.050 ************************************ 00:31:04.050 11:38:59 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:04.309 * Looking for test storage... 
00:31:04.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:04.309 11:38:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:04.309 11:38:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.309 11:38:59 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.309 11:38:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.309 11:38:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.309 11:38:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.309 11:38:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.309 11:38:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.309 11:38:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.309 11:38:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:04.309 11:38:59 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.309 11:38:59 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.309 11:38:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:04.309 11:38:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:04.309 11:38:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:04.309 11:38:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:04.309 11:38:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:04.309 11:38:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:04.309 11:38:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:04.309 11:38:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:04.309 11:38:59 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:31:04.309 11:38:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:04.310 /tmp/:spdk-test:key0 00:31:04.310 11:38:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:04.310 11:38:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:04.310 11:38:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:04.310 /tmp/:spdk-test:key1 00:31:04.310 11:38:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1714794 00:31:04.310 11:38:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1714794 00:31:04.310 11:38:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:04.310 11:38:59 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1714794 ']' 00:31:04.310 11:38:59 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.310 11:38:59 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.310 11:38:59 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.310 11:38:59 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.310 11:38:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:04.310 [2024-07-26 11:38:59.897880] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:31:04.310 [2024-07-26 11:38:59.897927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714794 ] 00:31:04.310 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.310 [2024-07-26 11:38:59.962188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.568 [2024-07-26 11:39:00.046235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:05.134 11:39:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:05.134 [2024-07-26 11:39:00.690817] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.134 null0 00:31:05.134 [2024-07-26 11:39:00.722883] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:05.134 [2024-07-26 11:39:00.723212] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.134 11:39:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:05.134 670125707 00:31:05.134 11:39:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:05.134 156576551 00:31:05.134 11:39:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1715069 00:31:05.134 11:39:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1715069 
/var/tmp/bperf.sock 00:31:05.134 11:39:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1715069 ']' 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:05.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.134 11:39:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:05.134 [2024-07-26 11:39:00.793007] Starting SPDK v24.09-pre git sha1 487ff9e1a / DPDK 24.03.0 initialization... 
00:31:05.134 [2024-07-26 11:39:00.793053] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715069 ] 00:31:05.392 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.392 [2024-07-26 11:39:00.859804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.392 [2024-07-26 11:39:00.938322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.956 11:39:01 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.956 11:39:01 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:05.956 11:39:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:05.956 11:39:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:06.213 11:39:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:06.213 11:39:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:06.471 11:39:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:06.471 11:39:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:06.471 [2024-07-26 11:39:02.116947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:06.729 
nvme0n1 00:31:06.729 11:39:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:06.729 11:39:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:06.729 11:39:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:06.729 11:39:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:06.729 11:39:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:06.729 11:39:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:06.988 11:39:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:06.988 11:39:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:06.988 11:39:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@25 -- # sn=670125707 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 670125707 == \6\7\0\1\2\5\7\0\7 ]] 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 670125707 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:06.988 11:39:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.246 Running I/O for 1 seconds... 00:31:08.180 00:31:08.180 Latency(us) 00:31:08.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.180 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:08.180 nvme0n1 : 1.01 20069.66 78.40 0.00 0.00 6353.09 2090.91 7427.41 00:31:08.180 =================================================================================================================== 00:31:08.180 Total : 20069.66 78.40 0.00 0.00 6353.09 2090.91 7427.41 00:31:08.180 0 00:31:08.180 11:39:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:08.180 11:39:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:08.438 11:39:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:08.438 11:39:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:08.438 11:39:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:08.438 11:39:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:08.438 11:39:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:08.438 11:39:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.438 11:39:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:08.438 11:39:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:08.438 11:39:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:08.438 11:39:04 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:08.438 11:39:04 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:31:08.438 11:39:04 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:08.438 11:39:04 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:08.438 11:39:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.438 11:39:04 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:08.438 11:39:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.438 11:39:04 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:08.438 11:39:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:08.697 [2024-07-26 11:39:04.222931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:08.697 [2024-07-26 11:39:04.223315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c5770 (107): Transport endpoint is not connected 00:31:08.697 [2024-07-26 11:39:04.224310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x7c5770 (9): Bad file descriptor 00:31:08.697 [2024-07-26 11:39:04.225311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.697 [2024-07-26 11:39:04.225321] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:08.697 [2024-07-26 11:39:04.225327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.697 request: 00:31:08.697 { 00:31:08.697 "name": "nvme0", 00:31:08.697 "trtype": "tcp", 00:31:08.697 "traddr": "127.0.0.1", 00:31:08.697 "adrfam": "ipv4", 00:31:08.697 "trsvcid": "4420", 00:31:08.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.697 "prchk_reftag": false, 00:31:08.697 "prchk_guard": false, 00:31:08.697 "hdgst": false, 00:31:08.697 "ddgst": false, 00:31:08.697 "psk": ":spdk-test:key1", 00:31:08.697 "method": "bdev_nvme_attach_controller", 00:31:08.697 "req_id": 1 00:31:08.697 } 00:31:08.697 Got JSON-RPC error response 00:31:08.697 response: 00:31:08.697 { 00:31:08.697 "code": -5, 00:31:08.697 "message": "Input/output error" 00:31:08.697 } 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@33 -- # sn=670125707 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 670125707 00:31:08.697 1 links removed 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@33 -- # sn=156576551 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 156576551 00:31:08.697 1 links removed 00:31:08.697 11:39:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1715069 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1715069 ']' 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1715069 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1715069 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1715069' 00:31:08.697 killing process with pid 1715069 00:31:08.697 11:39:04 keyring_linux -- common/autotest_common.sh@969 -- # kill 1715069 00:31:08.697 Received shutdown signal, test time was about 1.000000 seconds 00:31:08.697 00:31:08.697 Latency(us) 00:31:08.697 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.697 =================================================================================================================== 00:31:08.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:08.698 11:39:04 keyring_linux -- common/autotest_common.sh@974 -- # wait 1715069 00:31:08.957 11:39:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1714794 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1714794 ']' 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1714794 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714794 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714794' 00:31:08.957 killing process with pid 1714794 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@969 -- # kill 1714794 00:31:08.957 11:39:04 keyring_linux -- common/autotest_common.sh@974 -- # wait 1714794 00:31:09.216 00:31:09.216 real 0m5.181s 00:31:09.216 user 0m9.441s 00:31:09.216 sys 0m1.476s 00:31:09.216 11:39:04 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:09.216 11:39:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:09.216 ************************************ 00:31:09.216 END TEST keyring_linux 00:31:09.216 ************************************ 00:31:09.216 11:39:04 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:31:09.216 11:39:04 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:09.216 11:39:04 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:09.216 11:39:04 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:09.216 11:39:04 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:31:09.216 11:39:04 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:31:09.216 11:39:04 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:31:09.216 11:39:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:09.216 11:39:04 -- common/autotest_common.sh@10 -- # set +x 00:31:09.475 11:39:04 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:31:09.475 11:39:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:09.475 11:39:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:09.475 11:39:04 -- common/autotest_common.sh@10 -- # set +x 00:31:14.742 INFO: APP EXITING 00:31:14.742 INFO: killing all VMs 00:31:14.742 INFO: killing vhost app 00:31:14.742 INFO: EXIT DONE 00:31:17.275 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:17.275 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:00:04.2 (8086 2021): Already 
using the ioatdma driver 00:31:17.275 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:17.275 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:20.565 Cleaning 00:31:20.565 Removing: /var/run/dpdk/spdk0/config 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:20.565 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:20.565 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:20.565 Removing: /var/run/dpdk/spdk1/config 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:20.565 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:20.565 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:20.565 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:20.565 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:20.565 Removing: /var/run/dpdk/spdk2/config 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:20.565 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:20.565 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:20.565 Removing: /var/run/dpdk/spdk3/config 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:20.565 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:20.565 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:20.565 Removing: /var/run/dpdk/spdk4/config 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:20.565 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:20.565 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:20.565 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:20.565 Removing: /dev/shm/bdev_svc_trace.1 00:31:20.565 Removing: /dev/shm/nvmf_trace.0 00:31:20.565 Removing: /dev/shm/spdk_tgt_trace.pid1332765 00:31:20.565 Removing: /var/run/dpdk/spdk0 00:31:20.565 Removing: /var/run/dpdk/spdk1 00:31:20.565 Removing: /var/run/dpdk/spdk2 00:31:20.565 Removing: /var/run/dpdk/spdk3 00:31:20.565 Removing: /var/run/dpdk/spdk4 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1330403 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1331477 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1332765 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1333399 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1334351 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1334588 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1335559 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1335619 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1335913 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1337660 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1338933 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1339223 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1339521 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1339968 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1340311 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1340550 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1340743 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1341028 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1341848 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1344834 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1345104 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1345434 00:31:20.565 Removing: 
/var/run/dpdk/spdk_pid1345592 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1346086 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1346127 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1346589 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1346819 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1347077 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1347116 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1347355 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1347583 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1347987 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1348188 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1348502 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1352360 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1356620 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1367366 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1367859 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1372245 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1372580 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1376849 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1382721 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1385410 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1395972 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1404895 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1407225 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1408152 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1425014 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1429063 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1472860 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1478253 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1484008 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1490061 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1490137 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1490939 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1491848 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1492771 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1493237 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1493269 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1493555 
00:31:20.565 Removing: /var/run/dpdk/spdk_pid1493696 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1493698 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1494619 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1495483 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1496259 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1496929 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1496937 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1497168 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1498405 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1499395 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1508206 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1532852 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1537453 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1539544 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1541440 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1541594 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1541781 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1542020 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1542750 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1544585 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1545577 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1546076 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1548182 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1548899 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1549626 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1553674 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1563642 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1567685 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1573660 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1574976 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1576516 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1581363 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1585603 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1592967 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1592982 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1597681 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1597911 00:31:20.565 Removing: 
/var/run/dpdk/spdk_pid1598141 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1598493 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1598603 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1603085 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1603653 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1607994 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1610746 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1616145 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1621694 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1630764 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1637813 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1637863 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1656063 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1656762 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1657452 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1658074 00:31:20.565 Removing: /var/run/dpdk/spdk_pid1658907 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1659602 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1660301 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1660790 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1665073 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1665384 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1671329 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1671578 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1673937 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1682301 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1682306 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1687502 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1689376 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1691334 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1692537 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1694515 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1695578 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1704317 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1704779 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1705385 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1707728 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1708196 
00:31:20.566 Removing: /var/run/dpdk/spdk_pid1708740 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1712708 00:31:20.566 Removing: /var/run/dpdk/spdk_pid1712793 00:31:20.825 Removing: /var/run/dpdk/spdk_pid1714350 00:31:20.825 Removing: /var/run/dpdk/spdk_pid1714794 00:31:20.825 Removing: /var/run/dpdk/spdk_pid1715069 00:31:20.825 Clean 00:31:20.825 11:39:16 -- common/autotest_common.sh@1451 -- # return 0 00:31:20.825 11:39:16 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:31:20.825 11:39:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:20.825 11:39:16 -- common/autotest_common.sh@10 -- # set +x 00:31:20.825 11:39:16 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:31:20.825 11:39:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:20.825 11:39:16 -- common/autotest_common.sh@10 -- # set +x 00:31:20.825 11:39:16 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:20.825 11:39:16 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:31:20.825 11:39:16 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:31:20.825 11:39:16 -- spdk/autotest.sh@395 -- # hash lcov 00:31:20.825 11:39:16 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:20.825 11:39:16 -- spdk/autotest.sh@397 -- # hostname 00:31:20.825 11:39:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:31:21.085 geninfo: WARNING: invalid characters removed from testname! 
00:31:43.078 11:39:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:43.078 11:39:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:44.985 11:39:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:46.361 11:39:41 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:48.263 11:39:43 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:50.164 11:39:45 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:51.540 11:39:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:51.799 11:39:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:51.799 11:39:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:51.799 11:39:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:51.799 11:39:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:51.799 11:39:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:51.799 11:39:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:51.799 11:39:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:51.799 11:39:47 -- paths/export.sh@5 -- $ export PATH
00:31:51.799 11:39:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:51.799 11:39:47 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:51.799 11:39:47 -- common/autobuild_common.sh@447 -- $ date +%s
00:31:51.799 11:39:47 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721986787.XXXXXX
00:31:51.799 11:39:47 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721986787.vGIRc4
00:31:51.799 11:39:47 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:31:51.799 11:39:47 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:31:51.799 11:39:47 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:51.799 11:39:47 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:51.799 11:39:47 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:51.799 11:39:47 -- common/autobuild_common.sh@463 -- $ get_config_params
00:31:51.799 11:39:47 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:31:51.799 11:39:47 -- common/autotest_common.sh@10 -- $ set +x
00:31:51.799 11:39:47 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:51.799 11:39:47 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:31:51.799 11:39:47 -- pm/common@17 -- $ local monitor
00:31:51.799 11:39:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:51.799 11:39:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:51.799 11:39:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:51.799 11:39:47 -- pm/common@21 -- $ date +%s
00:31:51.799 11:39:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:51.799 11:39:47 -- pm/common@21 -- $ date +%s
00:31:51.799 11:39:47 -- pm/common@25 -- $ sleep 1
00:31:51.799 11:39:47 -- pm/common@21 -- $ date +%s
00:31:51.799 11:39:47 -- pm/common@21 -- $ date +%s
00:31:51.799 11:39:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986787
00:31:51.799 11:39:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986787
00:31:51.799 11:39:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986787
00:31:51.799 11:39:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986787
00:31:51.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986787_collect-vmstat.pm.log
00:31:51.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986787_collect-cpu-load.pm.log
00:31:51.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986787_collect-cpu-temp.pm.log
00:31:51.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986787_collect-bmc-pm.bmc.pm.log
00:31:52.736 11:39:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:31:52.736 11:39:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:52.736 11:39:48 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:52.736 11:39:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:52.736 11:39:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:52.736 11:39:48 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:52.736 11:39:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:52.736 11:39:48 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:52.736 11:39:48 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:52.736 11:39:48 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:52.736 11:39:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:52.736 11:39:48 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:52.736 11:39:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:52.736 11:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:52.736 11:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:52.736 11:39:48 -- pm/common@44 -- $ pid=1725531
00:31:52.736 11:39:48 -- pm/common@50 -- $ kill -TERM 1725531
00:31:52.736 11:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:52.736 11:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:52.736 11:39:48 -- pm/common@44 -- $ pid=1725533
00:31:52.736 11:39:48 -- pm/common@50 -- $ kill -TERM 1725533
00:31:52.736 11:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:52.736 11:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:52.736 11:39:48 -- pm/common@44 -- $ pid=1725535
00:31:52.736 11:39:48 -- pm/common@50 -- $ kill -TERM 1725535
00:31:52.736 11:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:52.736 11:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:52.736 11:39:48 -- pm/common@44 -- $ pid=1725559
00:31:52.736 11:39:48 -- pm/common@50 -- $ sudo -E kill -TERM 1725559
00:31:52.736 + [[ -n 1225589 ]]
00:31:52.736 + sudo kill 1225589
00:31:53.006 [Pipeline] }
00:31:53.026 [Pipeline] // stage
00:31:53.032 [Pipeline] }
00:31:53.050 [Pipeline] // timeout
00:31:53.056 [Pipeline] }
00:31:53.074 [Pipeline] // catchError
00:31:53.081 [Pipeline] }
00:31:53.101 [Pipeline] // wrap
00:31:53.107 [Pipeline] }
00:31:53.123 [Pipeline] // catchError
00:31:53.134 [Pipeline] stage
00:31:53.136 [Pipeline] { (Epilogue)
00:31:53.151 [Pipeline] catchError
00:31:53.153 [Pipeline] {
00:31:53.168 [Pipeline] echo
00:31:53.170 Cleanup processes
00:31:53.176 [Pipeline] sh
00:31:53.464 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:53.464 1725645 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:53.464 1725930 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:53.478 [Pipeline] sh
00:31:53.762 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:53.763 ++ grep -v 'sudo pgrep'
00:31:53.763 ++ awk '{print $1}'
00:31:53.763 + sudo kill -9 1725645
00:31:53.775 [Pipeline] sh
00:31:54.059 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:04.051 [Pipeline] sh
00:32:04.391 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:04.391 Artifacts sizes are good
00:32:04.409 [Pipeline] archiveArtifacts
00:32:04.416 Archiving artifacts
00:32:04.578 [Pipeline] sh
00:32:04.862 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:32:04.878 [Pipeline] cleanWs
00:32:04.889 [WS-CLEANUP] Deleting project workspace...
00:32:04.889 [WS-CLEANUP] Deferred wipeout is used...
00:32:04.896 [WS-CLEANUP] done
00:32:04.898 [Pipeline] }
00:32:04.920 [Pipeline] // catchError
00:32:04.933 [Pipeline] sh
00:32:05.216 + logger -p user.info -t JENKINS-CI
00:32:05.225 [Pipeline] }
00:32:05.241 [Pipeline] // stage
00:32:05.247 [Pipeline] }
00:32:05.265 [Pipeline] // node
00:32:05.272 [Pipeline] End of Pipeline
00:32:05.329 Finished: SUCCESS